  LongeCity
              Advocacy & Research for Unlimited Lifespans





AI soars past the Turing test

chatgpt turing test


#151 ambivalent

  • Guest
  • 801 posts
  • 189
  • Location:uk
  • NO

Posted 03 June 2025 - 03:22 PM

Thanks AD, that's very interesting.

 

I was struggling for a metaphor the other day - I don't have many, but the speed of light came to mind: the increasing mass is limiting - more and more energy needed for marginal gains in speed. It has felt that with all of this mass of information, subtlety and insights become lost, and so more isn't naturally going to be better - just worse. I go back to Chomsky when, around two and a half years ago, a few months before his stroke, he said that human intelligence is about making connections on very limited information. It seems we are designed mostly very well for this, unsurprisingly, through harsh evolution - these models seem the opposite (to me): fairly simple in design (a big matrix) with an overwhelming amount of information to make judgements upon. 

 

 

If the model is built on ever increasing mass, we can imagine the struggle. Humans seem a little like this: we start out as children with wide "variance", making weird and unusual connections adults don't, and this seems to decline with an ever growing mass of information. It is interesting that a reported trait of several geniuses was that they remained playful. Sometimes we see experts switch fields and make major breakthroughs: they know less but are able to make unusual connections and solve hard problems.

 

 

At the end, I was a little surprised by this:

 

"In the context of large language models, research found that training LLMs on predecessor-generated text — language models are trained on the synthetic data produced by previous models — causes a consistent decrease in the lexical, syntactic, and semantic diversity of the model outputs through successive iterations, notably remarkable for tasks demanding high levels of creativity"

 

 

This seems like a bad design idea: the models don't know reality, and then the external world stops being a stimulus - the information they received was, it seemed, analogous to the shadows in Plato's cave anyhow. Once a model starts using itself, its own answers, as learning material, there may arise fairground-mirror images of those shadows - eventually lost through iterations of these images. Beliefs in this model seem reinforced by itself - a pseudo-stimulus, rather than the reality it is trying to understand, it would seem.
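
A toy illustration of that feedback loop (my own sketch, not the cited study's setup): take a Gaussian stand-in for "reality", fit a model to samples from it, then train each successive generation only on synthetic samples drawn from the previous model. Run it a few times - the spread of what the model can produce rarely recovers and typically decays:

import numpy as np

rng = np.random.default_rng(0)

# "Reality": the distribution the first generation is trained on.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(1, 31):
    # "Train": estimate a model from the current training data.
    mu_hat, sigma_hat = data.mean(), data.std()
    if generation % 5 == 0:
        print(f"generation {generation:2d}: std = {sigma_hat:.3f}")
    # The next generation never sees reality again - it learns only from
    # synthetic samples of the current model (the pseudo-stimulus above).
    data = rng.normal(loc=mu_hat, scale=sigma_hat, size=50)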

 

I must admit there does seem a blandness to AI at times. It seems creative and yet, at the same time, to be missing something. 

 

Again thanks, that was very interesting - it does seem in line with the experience that these models haven't developed as hoped. I didn't run through the maths; that skill is presently flat-packed in the ivory tower loft! Another one of our useful evolutionary adaptations, it seems - it's been years since I rode a bicycle too!  

 

 

(I was rushed in the previous post; there were a few errors which completely inverted the meaning of a sentence, but hopefully the general direction and context of the post made this fairly clear!) 

 


Edited by ambivalent, 03 June 2025 - 03:27 PM.


#152 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 04 June 2025 - 03:49 PM

I don't understand why all this hate towards AI. Even if AI turns out to be just a great information grinder and pattern recognition tool, it will catapult knowledge and advances in biosciences and various health-related areas that it would take us measly humans decades and centuries to do. 

 

There is absolutely no way we get anywhere close to beating aging or achieving LEV without the help of AI to compress decades and centuries of advances in just years. 

 

Sure there are risks but that is so with any new technology. The more powerful the technology the higher the risks; we as a species must find ways to contain and manage them. Otherwise we might as well go back to the caves and back to using sticks and stones, there's certainly no existential risk there (until an asteroid or something else wipes us out).

 

I don't hate AI, I just don't like the way it is being used so far.

 

We are not getting solutions to aging. AI isn't solving nuclear fusion for energy. AI isn't curing cancer. AI isn't developing new propulsion for rockets, etc... Sure it is assisting in some ways, but it is not directly solving anything. Ever since the 1960s, AI optimists have been predicting its imminent arrival. It hasn't arrived yet.

 

Consider that we have had supercomputers, massive data storage, world-wide collaboration, and automated lab equipment for a few decades now - all leveraged toward curing diseases and aging, yet we have nothing. I am unsure that the current AI will help, except for tiny incremental steps.

 

The reason I am not completely optimistic about AGI in the near future is the current trends. AI is being used to kill people in war, control the population, and create a massive cybercrime industry. Students in school are becoming less smart and less creative because they are relying upon AI to "do their homework". Coders use AI even though its error rate is over 30%.

 

Since the debut of ChatGPT in late 2022, all of the advancements in AI have been toward making ever more realistic digital videos, audio, and media. It is hard to fathom how much time and energy is being spent on making digital videos. Hardly anything is being done about aging.




#153 Advocatus Diaboli

  • Guest
  • 629 posts
  • 641
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 04 June 2025 - 04:45 PM

Here is some of what is being done about aging with regard to AI, contra "Hardly anything is being done about aging.":

From DeepSeek AI:

Recent advances in AI for aging research (often called "Longevity AI") are accelerating breakthroughs in understanding, diagnosing, and even reversing age-related decline. Here are some key developments:

1. AI in Drug Discovery for Aging (Senolytics, Rapamycin Analogs, etc.)
  • Target Identification: AI models (e.g., DeepMind’s AlphaFold, Insilico Medicine’s GAN-based platforms) predict protein structures and identify aging-related drug targets (e.g., senescent cell markers, mTOR pathways).

  • Repurposing Existing Drugs: AI screens FDA-approved drugs for anti-aging potential (e.g., rapamycin analogs, metformin combinations).

  • Example: Insilico Medicine used AI to design a senolytic drug (targeting aging cells) now in clinical trials.

2. AI-Powered Biomarkers of Aging ("Aging Clocks")
  • Epigenetic Clocks: AI (e.g., DeepMAge, Horvath’s Clock) analyzes DNA methylation to predict biological age.

  • Multi-Omics Clocks: AI integrates data from genomics, proteomics, metabolomics to track aging (e.g., Altos Labs’ AI models).

  • Wearable Data: Companies like Deep Longevity (Human Longevity Inc.) use AI to predict aging from smartwatch/sensor data.

3. Early Disease Detection (Alzheimer’s, Cancer, CVD)
  • Retinal Scans: Google’s DeepMind can predict cardiovascular risk & Alzheimer’s from eye images.

  • Voice Analysis: AI detects Parkinson’s & cognitive decline from speech patterns (e.g., Winterlight Labs).

  • Blood Tests: Startups like GRAIL use AI to detect early-stage cancer from liquid biopsies.

4. Personalized Longevity Medicine
  • AI-Driven Interventions: Companies like Lifespan.io and Elysium Health use AI to recommend personalized supplements, diets, and exercise plans based on biomarkers.

  • Gene Therapy Optimization: AI helps design CRISPR-based therapies for age-related gene editing (e.g., Rejuvenate Bio’s work in dogs).

5. Robotics & AI for Elderly Care
  • Social Robots: AI-powered companions (e.g., ElliQ by Intuition Robotics) reduce loneliness.

  • Fall Detection: AI in smart homes (e.g., Samsung’s SARA robot) monitors elderly mobility.

  • Exoskeletons: AI-assisted suits (e.g., SuitX) help seniors maintain mobility.

6. AI in Caloric Restriction & Fasting Mimetics
  • Nutrient Sensing: AI models (e.g., Nutricia’s algorithms) optimize fasting-mimicking diets.

  • Gut Microbiome Analysis: AI (e.g., Seed Health’s platform) suggests probiotics for longevity.

Key Players in AI & Longevity
  • Insilico Medicine - AI-designed anti-aging drugs
  • Altos Labs (Jeff Bezos-backed) - Cellular reprogramming via AI
  • Calico Labs (Google/Alphabet) - AI for aging biomarkers
  • Deep Longevity - Epigenetic aging clocks
  • Life Biosciences - Mitochondrial repair via AI
Future Outlook
  • Clinical Trials: AI is speeding up trials for senolytics, NAD+ boosters, and telomere therapies.

  • Digital Twins: AI creates virtual patient models to simulate aging interventions.

  • Ethical AI: Debates on AI bias in aging research (e.g., underrepresentation of elderly in datasets).
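
(A footnote to item 2 above, not part of the DeepSeek output: the "aging clock" idea is, at its core, a penalized regression from methylation values to age - Horvath's clock is an elastic-net model over CpG sites. A minimal sketch with entirely synthetic data; the sizes, sites and signal below are invented purely for illustration:)

import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(42)

# Synthetic stand-in for a methylation dataset:
# 200 people x 1000 CpG sites, methylation fractions in [0, 1].
n_people, n_sites = 200, 1000
X = rng.uniform(0, 1, size=(n_people, n_sites))
ages = rng.uniform(20, 90, size=n_people)

# Pretend 20 of the sites drift with age (a made-up signal).
for s in rng.choice(n_sites, 20, replace=False):
    X[:, s] = np.clip(ages / 100 + rng.normal(0, 0.05, n_people), 0, 1)

# An elastic-net regression - the model family used by published
# epigenetic clocks - picks out the informative sites on its own.
clock = ElasticNetCV(cv=5).fit(X, ages)
pred = clock.predict(X)
print("sites kept:", int(np.sum(clock.coef_ != 0)))
# In-sample error, so optimistic - real clocks report held-out error.
print("mean error (years):", round(float(np.mean(np.abs(pred - ages))), 2))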



#154 forever freedom

  • Guest
  • 2,366 posts
  • 67

Posted 04 June 2025 - 05:04 PM

I don't hate AI, I just don't like the way it is being used so far.

 

We are not getting solutions to aging. AI isn't solving nuclear fusion for energy. AI isn't curing cancer. AI isn't developing new propulsion for rockets, etc... Sure it is assisting in some ways, but it is not directly solving anything. Ever since the 1960s, AI optimists have been predicting its imminent arrival. It hasn't arrived yet.

 

Consider that we have had supercomputers, massive data storage, world-wide collaboration, and automated lab equipment for a few decades now - all leveraged toward curing diseases and aging, yet we have nothing. I am unsure that the current AI will help, except for tiny incremental steps.

 

The reason I am not completely optimistic about AGI in the near future is the current trends. AI is being used to kill people in war, control the population, and create a massive cybercrime industry. Students in school are becoming less smart and less creative because they are relying upon AI to "do their homework". Coders use AI even though its error rate is over 30%.

 

Since the debut of ChatGPT in late 2022, all of the advancements in AI have been toward making ever more realistic digital videos, audio, and media. It is hard to fathom how much time and energy is being spent on making digital videos. Hardly anything is being done about aging.

 

I agree that AI is still very far from fulfilling what we expect from it, because expectations are so high. But AI has already given us AlphaFold, it was also essential in the rapid development of COVID vaccines, and it's increasingly being used to find novel molecules and drugs that then need time for trials - and there are many other examples.

 

Compared to what we expect from it, it still falls short; the expectations are sky high. Here the good ol' lake analogy seems appropriate. Start with a drop in the lake, double it, and after X doublings the lake is full. But at just 5 doublings before X we can barely see water in the lake, with only about 3% of it filled. I believe we are a few steps/"doublings" before the true takeoff, where the magic happens. You talk about us already having had decades of supercomputers and other computing technologies: we have been slowly filling the lake, doubling the water in it, for decades. We are reaching the tipping point.
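
For what it's worth, the arithmetic behind the analogy: k doublings before the lake is full, the fill fraction is 2^-k, so five doublings out the lake is 1/32, or about 3%, full.

# Fill fraction k doublings before the lake is full: 2**-k
for k in range(6, -1, -1):
    print(f"{k} doublings to go: {100 * 2**-k:5.1f}% full")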

 

I already can't imagine myself living without AI; I already find it incredibly useful in my daily life. It's certainly already the most incredible tool to come about in a very very long time.

 

AGI is just around the corner, and soon thereafter we shall get the big scientific advances you ask about, and that I agree with you we have not yet achieved. 



#155 ambivalent

  • Guest
  • 801 posts
  • 189
  • Location:uk
  • NO

Posted 20 June 2025 - 03:32 PM

AD's recent posts made me think that LongeCity guidelines are needed when using ChatGPT/DeepSeek as supporting evidence; these LLMs are too unreliable to be considered end-references. They are tools for further research - I don't mind the pastes, but users must be encouraged to provide references for these claims. There is a lot of ambiguity, there are false statements, and it isn't difficult to get these models to change their minds (as with Pluribus in poker). But I sense real danger in dialogue if AI outputs are used to shut down arguments - they can build the foundation of an argument, but they must not be it, nor close it - and that may be the effect, intended or not.

 

Forever Freedom, the endless doubling of a drop of water is a seductive analogy, and if reality were to mirror it (like, say, Moore's law) then we could reason this way. But we don't have a basis to assume this, I don't think. A software product doesn't get better and better just because the hardware relentlessly improves; it is bounded by its code - it can't solve new problems just because chip capacity doubles every two years.

 

For some software, chip capacity is a limiting factor when solutions are needed in a timely way. But some software will never solve certain problems; that is the limitation of the model. That is how we should see AI, I believe. What it can solve well is likely to be down to the characteristics of the problem.

 

Chess is a real-world problem that maps to computers especially well. Poker, for example, is different. A human chess player can never beat a computer, and a computer would, to the best of my understanding, do better against a human than, say, Fischer would.

 

The best poker player in the world could never beat the best poker algorithm, but the best human player could drastically outperform a computer against other humans. When a human plays a computer head to head, it plays on the computer's terms, so to speak. In chess there are no real-world factors that change things. It is possible, I suppose, that a human could expect to beat a human chess opponent quicker than a computer/AI could, because they may know better than a computer the mistakes the opponent will make. If you as a poker player have a read on a player, then you can make a better decision than any AI, because you can reduce the solution space. With that reduced solution space, the algorithms would do far better than humans once again - but the algorithm doesn't have the read, and has no way of getting to it. AI under its current design would never be able to solve the real-world problem of playing poker optimally, no matter how much time we give it - but it can (and did, some time ago) do so with chess. 

 

As Mind observed, we see amazing AI-generated videos; this is a problem AI can solve quickly and easily, and we see those improvements scale up over a short period of time. I don't find that surprising, because it is just interpolating between known truths (or sample data) and accepting or rejecting on that basis, extremely efficiently. So it doesn't seem too difficult to imagine fake videos of celebs being convincingly made. Making fake videos seems to me rather like an extension of the now-dropped term of mining big data for exploitable patterns. Being a bit wrong with fake videos is OK, but being a bit wrong with a complex system isn't (as in chaos theory).

 

What AI cannot do is still kind of obvious when we are asked to identify, for example, the pictures that contain stairs or motorcycles to proceed on a search engine: we do this easily - AI evidently doesn't. That is because we don't need huge information to perform these tasks - as Chomsky said, humans make intelligent judgements on limited information. 

 

AI is software, and as with all software the usefulness of its solutions depends on how well that software has the capacity to map the system it is modelling.

 

AI doesn't know reality, and if a critical chunk of that reality is missing, then I am not sure how we can expect it to solve such problems.

 

To put it one way: suppose the information AI has on biology - our measurements of a century or more - could fit an infinite number of theoretical biological systems (our bodies exist within a universe and are subject to its rules, which we do not fully understand - and neither does AI); then we can't expect all the answers. This is perfectly possible: mathematical infinity can exist under constraints, indeed under infinite constraints. And every time we add a piece of information we are just adding a constraint (which also may not be true, as we know from research). That doesn't mean there are no statements of truth within these models that relate to reality, which can be revealed with the help of computing/AI. It is a tool that will be appropriate for some problems and inappropriate for others - not human intelligence, just better. Nothing I see makes me think it is heading that way. It has been two years; exponential growth should have seen it overtake us if it were aligned to intelligence like ours - but it clearly isn't. It does amazing things, as computers always have.  
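
(A concrete toy version of the underdetermination point, mine for illustration: fit a parabola through three "measurements", then add any multiple of a polynomial that vanishes at all three points. Every such model agrees exactly on the data and disagrees everywhere else - and there are infinitely many of them, one per choice of c.)

import numpy as np

# Three "measurements" of some system.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 5.0])

# A unique parabola passes through them...
p2 = np.polyfit(x, y, 2)

# ...but so do infinitely many cubics: add any multiple of
# t*(t-1)*(t-2), which is zero at every measured point.
def cubic(t, c=3.0):
    return np.polyval(p2, t) + c * t * (t - 1) * (t - 2)

print(np.polyval(p2, x), cubic(x))      # identical on the data
print(np.polyval(p2, 3.0), cubic(3.0))  # very different off the data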

 

So I suppose, in my imaginings, I believe AI can rapidly exhaust certain solvable finite problems, but be left hopelessly marooned on more complex ones. We know from our experience that we can have too much information - our insights don't come from information overload. Our intelligence grows with information, not solely because of it - it is what our brain structure does with that information. But we are building AI more on information than intelligence, it seems to me. Anyhow, much of this is repetition. 

 

 

My concern with "AI" really is the brainwashing aspect of it all; it feels indoctrinating. MSM is losing its foothold in containing the masses - we see it now in the deepening political crisis; people have other resources. I have felt that ChatGPT will be a device to subjugate people once again. MSM has always been about creating an average or normalised belief amongst the population - that has gone. Selling AI as intelligent is attractive to free-thinkers: we can make sense of the world with its help, if we ask the right questions. If society believes it is true and honest - just as it did with MSM - then once again society can be controlled.

 

This MIT study reveals enough for us to be very concerned:

 

https://www.media.mi...ain-on-chatgpt/

 

Discussed further here:

 

 

 

The discussion about AI's intelligence is, I feel, a sideshow - more of a distraction - compared to the effect it has on our capacity to think and its use as a device of control. 

 

OpenAI have just been given a $200 million defence contract. Same old.  

 

 

 


Edited by ambivalent, 20 June 2025 - 03:46 PM.

  • Good Point x 2

#156 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 20 June 2025 - 03:54 PM

 

The discussion about AI's intelligence is, I feel, a sideshow - more of a distraction - compared to the effect it has on our capacity to think and its use as a device of control. 

 

OpenAI have just been given a $200 million defence contract. Same old.  

 

Agreed. All I see AI being used for right now - the profit-making use cases - is to control people and manipulate them to squeeze every last dollar out of their bank accounts.

 

It occurred to me that the Matrix "humans-used-as-batteries" scenario is not that far off the mark. AI is making people stupid and lazy. If this trend continues, then the only purpose for people will be to "feed" the machine. Humans will become "commercial batteries" whose only purpose is to keep buying useless material items and consuming ever more entertainment so that the big tech owners of the AI can keep making money.

 

Sadly, AI is not currently solving any big problems we face (energy, longevity, space travel).


Edited by Mind, 20 June 2025 - 03:55 PM.


#157 Advocatus Diaboli

  • Guest
  • 629 posts
  • 641
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 20 June 2025 - 09:17 PM

Ambivalent, I, along with you, would prefer to see references for claims. When I make claims of fact I generally supply references. Exceptions would include not providing references for claims or facts that I consider to be well known. For example, I wouldn't cite Whitehead and Russell’s  "Principia Mathematica" in order to substantiate a claim that 1+1=2. Likewise, I generally won't provide references for assertions made by others, an AI for example, that I may quote.

 

I, for one, would like to see references substantiating "The best poker player in the world could never beat the best poker algorithm, but the best human player could drastically outperform a computer against other humans.", as it appears to me to be a statement of what I would characterize as a claim of fact rather than a statement of opinion.

 

Had you included something such as "I think" as a preface, I wouldn't have a problem. Well, notwithstanding the seemingly contradictory nature of the statement--i.e. how can the "best player in the world" "drastically outperform" something (the poker algorithm) that can't be beaten by humans? Unless "outperform", as you use it, means something other than winning.

 

 

 

 



#158 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 23 June 2025 - 04:59 PM

One reason that AI cannot "solve" aging right now - or even come close to helping - is that a significant majority of health and aging research is junk.



#159 Advocatus Diaboli

  • Guest
  • 629 posts
  • 641
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 23 June 2025 - 06:22 PM

Mind writes in post #158 

 

"or even come close to helping" referring to AI use to " 'solve' aging".

 

What is it that you mean by "even come close", Mind? Seems like a pretty bold statement.

 

Perhaps you could identify some benchmarks that you think would indicate that AI contributions are getting close to solving the problem?


Edited by Advocatus Diaboli, 23 June 2025 - 06:42 PM.


#160 ambivalent

  • Guest
  • 801 posts
  • 189
  • Location:uk
  • NO

Posted 24 June 2025 - 04:12 PM

Ambivalent, I, along with you, would prefer to see references for claims. When I make claims of fact I generally supply references. Exceptions would include not providing references for claims or facts that I consider to be well known. For example, I wouldn't cite Whitehead and Russell’s  "Principia Mathematica" in order to substantiate a claim that 1+1=2. Likewise, I generally won't provide references for assertions made by others, an AI for example, that I may quote.

 

I, for one, would like to see references substantiating "The best poker player in the world could never beat the best poker algorithm, but the best human player could drastically outperform a computer against other humans.", as it appears to me to be a statement of what I would characterize as a claim of fact rather than a statement of opinion.

 

Had you included something such as "I think" as a preface, I wouldn't have a problem. Well, notwithstanding the seemingly contradictory nature of the statement--i.e. how can the "best player in the world" "drastically outperform" something (the poker algorithm) that can't be beaten by humans? Unless "outperform", as you use it, means something other than winning.

 

 

 

AD,

 

My post wasn’t meant to be a personal jibe - I apologise if it was received as such. 

 

I am certainly not the “cite reference” police; after 12 years here I have twice appended “Needs References” to a post. I will have lazily omitted references at times, but am generally fairly good on the important things. That isn’t the point, though, and any errors my post may have contained (I will come to them later) are not hypocrisy, because, and I think this was quite clear, I am arguing for the establishment of reference standards for AI-produced content - which is a significant and emerging source of material.

 

Those standards which you sought to address within my post would fall under general forum guidelines and expectations - most of us meet, exceed and fall short of those obligations from time to time. It would not be hard to scroll through my posts and find instances of such failures, though in this instance I do not believe a reference was necessary (nor the statement contradictory, as I hope to demonstrate in the second part of this post).

 

As mentioned, I am not a pedant on such things. I, like most, can often discern when a person is stating something as fact when it is clearly an opinion - but often both parties are aware it is an opinion, as is the way of language.

 

References are, as I am sure you agree, important; they are a courtesy to the reader, obligating the author to navigate by facts. Adherence to these norms is well established on this site, and I am not trying, or interested in trying, to make them stricter. Posts can have too many references, and I will not always include them if I feel a validating reference is easily found - which is quite normal.

 

The sole issue of the post was to establish guidelines for using AI-generated content.  

 

It was your response to Mind’s comment via DeepSeek that gave me a measure of discomfort I needed to reflect upon. 

 

Here’s the problem: 

 

Suppose you, as opposed to DeepSeek, had written that post a couple of years ago. It would likely have taken many hours to put together, rather than a matter of seconds for the AI algorithm. Had the post been written by you, you would have expected of yourself, and been expected, to provide references.

 

When posting this way there is the implicit assumption that explicit references are not needed - that the AI is in and of itself a reliable reference. This, as we all know from experience, is not true. In fact, you directly contributed to this observation when providing the reference on LLM model collapse.

 

As we have been discussing, there is a real danger that AI becomes the accepted authority, and your post implicitly assumed this to be true - otherwise you would have provided references for a post that would have needed several had it been written by a forum contributor. 

 

In the scenario described, the onus is on Mind to disprove the AI statements, which could take all day, when you have invested only 10 seconds yourself. Unless the person is willing to put in that investment, the argument is shut down. And that is a problem.

 

The AI becomes the authority because it can easily generate content that takes considerable time and effort to verify or disprove. It becomes easier to accept, because it is “probably or mostly correct”.

 

Finding references is the least that should be done; if they can’t be found, then the content should not be included - or at the least it should be indicated that references were searched for but not found.

 

AI is known to fabricate, and to change its mind under light interrogation. In this discussion in particular there is ambiguity as to what is actually produced within the field of AI rather than by AI itself - as was the case with Pluribus. And indeed as to whether what is defined as AI is in fact artificial intelligence - and not dressed-up autofill.

 

So once again, it is a courtesy to the reader to provide this additional research - not for the reader to do the disproving when no effort has been made by the poster to prove the AI’s statements. 

 

I consider your post to have been a very useful contribution - that wasn’t the point of my post. I think this software can put us in the right place to mine, providing leads that allow us to research further.

 

That is why we need forum guidelines: we cannot accept AI-generated content as an authority - we cannot just paste its content and expect others to sort it out; we must contribute significantly to that end ourselves. 

 

If not, we will fall into a trap. I, like many, remember the early days of search engines, when you would get an honest answer to your request - or the best effort. Now, of course, search engines weight rankings by profit - but we were trained in the early days to believe the searches were authentic, and were in the end manipulated for profit. 

 

This of course is how AI could turn out: we become trained to trust it, and too lazy to challenge it. That is why I believe we need certain rules around AI content - that was the purpose of the post - not to initiate a forum crackdown on unreferenced assertions. 

 

In general, the importance of the statement - how central it is - guides the need for a reference. As stated, I don’t believe my poker content needed one; it wasn’t important enough content regardless, I would say - but that is a subjective position. As said, I don’t have a history of down-rating people for this, and if I feel a post needs a reference I will ask, and do on occasion.   

 

In the post I put out, what is missing is not a reference, but perhaps a good enough explanation, since it is reason-based.

A good guide for AI-generated content, I would suggest: provide the references you would have been expected to provide had you written the post yourself.

 

I am certainly not keen on discussions being shut down with content provided by AI tools that takes a few seconds to produce and potentially hours to disprove or verify; that would be a bad precedent, and kill debate. The poster should seek to verify the AI content before posting, as a courtesy to the reader: it is far too unreliable, and will at times represent misinformation - something we should avoid.     

 

This asymmetry of effort could lead us to be force-fed AI content - and encourage us to accept, and not to challenge.

 

AI content is extremely useful but unreliable - we shouldn’t cut and paste it unless the thread, in its construction, permits it.

 

 

 

Now, on to the poker content - which is becoming somewhat OT.

 

 

There isn’t anything contradictory in the statements I presented, and references were not required, I believe, because the statement is driven by reason on top of some basic, but perhaps poorly expressed, assumptions. 

 

To speak of the best poker player in the world is somewhat lazy, I admit, because it requires a constraint. If we asked a tennis enthusiast who the best tennis player in the world is, we might be asked “On what surface?” - and that rather applies to poker. There are different poker environments, and the importance of differing skills varies depending on the game. Some games can be very technically based, others more improvised - playing and reading the player.

 

But the definition doesn’t matter too much here, as it doesn’t undermine the argument. 

 

By outperform I mean, as you suggest, to win more. 

 

So I am stating that even though a poker algorithm may best every player in the world, the best player in the world could dramatically outperform the algorithm against some humans. This is no contradiction. For this discussion I mean the statement in one way, but it could easily be shown to be true in another. 

 

The other way relates to game theory. A GTO strategy - a game-theoretic optimal strategy - cannot be beaten: no player in the world could defeat it over time (nor perfectly replicate it, naturally, due to its complexity - it isn’t tic-tac-toe!). But by definition a GTO strategy is only optimal against itself: against any other strategy it is suboptimal, in the sense that some other strategy could always win more against that non-GTO opponent than GTO does. Those exploitative strategies are, however, themselves always exploitable - all except GTO. GTO might be viewed as a defensive strategy: nothing can beat it, but it can always be improved upon against sub-optimal strategies. Deviate from GTO, though, and you yourself become exploitable - which is why it is a defensive strategy.
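
The property is easiest to see in a game small enough to write down - a sketch of my own, with paper-scissors-stone standing in for poker. The GTO strategy there is to play each move a third of the time: nothing beats it over time, but it also wins nothing from a bad player, whereas an exploitative strategy punishes the leak at the cost of becoming exploitable itself:

import random

MOVES = ["stone", "paper", "scissors"]
BEATS = {"stone": "scissors", "paper": "stone", "scissors": "paper"}

def score(a, b):  # +1 win, 0 tie, -1 loss for player a
    return 0 if a == b else (1 if BEATS[a] == b else -1)

def gto():
    # Game-theoretic optimum: uniform mixing, unexploitable by anything.
    return random.choice(MOVES)

def bad_player():
    # A leak: throws stone far too often.
    return random.choices(MOVES, weights=[0.6, 0.2, 0.2])[0]

def exploiter():
    # Max-exploit counter to that leak: always paper. Wins far more than
    # GTO against this opponent, but is itself trivially exploitable.
    return "paper"

n = 100_000
print("GTO vs bad player:      ", sum(score(gto(), bad_player()) for _ in range(n)) / n)
print("exploiter vs bad player:", sum(score(exploiter(), bad_player()) for _ in range(n)) / n)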

 

As I recall, Pluribus wasn’t adaptive - players are. Pluribus was playing against top players - which could, a little ironically, show it comparatively in its best light. If you put two very bad players amongst the pros along with Pluribus, there is a good chance the pros would outperform Pluribus, because GTO would be nowhere near maximally exploitative against these bad players. Pro players would often be much closer to the optimal strategy against bad players than GTO is. 

 

However, the point I was trying to make in relation to the subject discussion was that a top poker player could significantly outperform an algorithm against some given player - i.e. be expected to win more - because the player is of the physical world, and there are a vast number of variables in play that are not accessible to the algorithm, nor present when said player plays the computer. 

 

So a skillful human will have the opportunity to reduce the solution space in cases where the algorithm cannot. 

 

Once you bring in tells, you change the problem space. The human, for example, may know his opponent almost never has Aces in this scenario because, say, he didn’t look at his watch, which he always does when raising with them. The human is thus optimising a different problem to the computer, which cannot discount the player holding Aces while the bet size is consistent with holding Aces. So the human, of course, can outperform the computer, if the information he gathers is significant enough.

 

When a computer and a top poker player compete against some given poker player, they may have the same objective, but they are trying to solve different problems; or perhaps better, they are trying to solve the same problem while acting on different information. The poker pro has more information - and if it is better information, he could be expected to do better. 

When the poker pro plays the computer algorithm, he is doing so on the algorithm's terms; his real-world skills are redundant. 

 

With chess, there are next to no real-world factors which can benefit the player over the computer; in poker that isn’t the case.   

 

The only assumption I am making here is that tells exist and are meaningful, and that they are accessible to humans but not to computer players (which requires no references) - the rest is deduction. Against the same player, the human and the computer are solving different problems - so the human poker player may perform better than the computer against said player, even though he would be defeated by the computer. It’s not quite paper-scissors-stone, but that could serve as an analogy - different attributes come into play against different opponents: there isn’t an ordered hierarchy.


Edited by ambivalent, 24 June 2025 - 05:05 PM.


#161 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 24 June 2025 - 04:15 PM

Mind writes in post #158 

 

"or even come close to helping" referring to AI use to " 'solve' aging".

 

What is it that you mean by "even come close", Mind? Seems like a pretty bold statement.

 

Perhaps you could identify some benchmarks that you think would indicate that AI contributions are getting close to solving the problem?

 

I will have to think about benchmarks.

 

Admittedly, my perspective comes from my involvement in life extension advocacy for over two decades. To this day, there is nothing proven to extend human lifespan, in spite of all the research, computing power, and money that has been thrown at the problem. We are not making any progress, perhaps because most health and medical research is "junk" science, to put it harshly.



#162 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 29 June 2025 - 09:48 AM

Here is an article which summarizes everything I have mentioned over the last year in regard to the trends in AI (I did not write the article, even though the thoughts are very similar).

 

 

 

Let's be clear: the same people who promised you connection, promised you truth, are now building the walls of your digital prison. The tech titans who once championed open standards and user sovereignty have become data barons, carefully controlling the narrative and the neural nets. Platforms that once amplified your voice now filter, flag, or erase it. Algorithms that once connected ideas now corral them into monetizable silos.

 



#163 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 01 July 2025 - 04:15 PM

Just another video about how AI is being used to enslave people - not solve the world's biggest problems. Notice how the corporate heads/researchers are not even hiding it anymore. They call it "choice engineering" because calling it manipulation, enslavement, or propaganda doesn't sound very good to the marketing team.


  • Informative x 1

#164 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 16 July 2025 - 04:08 PM

The current AI is impressive mimicry of human thought, but it still has a long way to go, considering it uses hundreds of thousands of times more resources to generate simple answers and draw pictures/create video, than does the human brain.



#165 ambivalent

  • Guest
  • 801 posts
  • 189
  • Location:uk
  • NO

Posted 18 July 2025 - 03:24 PM

Here are a couple more videos I caught a couple of weeks back.

 

AI being trained on the problems it fails just demonstrates it to be a money-making marketing ploy. It is perhaps akin to trying to find a solution to aging, failing, and then deciding to sell the lemon to the public in order to make money - every now and then carrying out plastic surgery on your anti-aging model to cover up the cracks in the flawed therapy. Eventually, though, it collapses, and there is no hiding its failures. 

 

One of the contributors mentioned how they decided to go for scale, and this is the money-making part of it. It is crazy to think that in order to replicate or surpass human intelligence you need to boil the planet - that's just a bad design of intelligence. And it is pretty obvious that it is, but a lot of money can be made, and that has to be exhausted before research heads are turned elsewhere, it seems. 

 

https://tinyurl.com/ps4kn7zz

 

https://tinyurl.com/342yspkw

 

(substituted direct YT links to circumvent the inclusion of overblown video-linked images!)


Edited by ambivalent, 18 July 2025 - 04:03 PM.

  • Good Point x 1

#166 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 29 July 2025 - 05:14 PM

AI in its current form is anti-human, and it is obvious to most people, yet it keeps moving forward without hesitation and with only toothless regulation. A programmer friend of mine said that in a couple of years AI/robots will start slaughtering people. He said it rather flippantly, in semi-jest. I replied, yes, you are probably right, and then we both stood in silence contemplating how short a time we have left before AI takes over everything.



#167 ambivalent

  • Guest
  • 801 posts
  • 189
  • Location:uk
  • NO

Posted 18 August 2025 - 04:23 PM

Well, I am still in the nothing-to-see-here camp with regard to AI as artificial intelligence, rather than AI as the name for a new technology - still with Penrose!

 

The other day I made a simple observation in a field and asked the AI what it thought. It did the usual blowing smoke up my behind. I then wondered whether this observation had been made elsewhere; it replied instantly, the only way it can of course, and said no - but claimed that it had run my idea by a couple of people in the area, and it provided their replies. 

 

The first observation was, of course, that this is next-level lying - but it also revealed real failures - an absolute failure of intelligence. No training, processing power, data or memory is going to overcome this - it just isn't intelligent design. 

 

This interaction showed that it has no perception of time, and no perception of my perceiving its perception of time. It may be able to define it and measure it, but not experience it - just as computers have always been unable to.

 

A child at two or three overcomes this problem, when they start to think from the other person's perspective. A mother might tell a child they need to ask dad if they can have one of his chocolates. The child realises they can't just instantly say "OK, I have just asked him and he said yes" - it's a low-cost attempt, but destined to fail. The child might say they have already done so, or perhaps will leave the room for a minute and come back and say "he said yes". A lie will likely have flaws the mother could perceive or interrogate, but the child knows some basic things - it isn't possible for dad not to be there to give his answer. The child knows how time and location are experienced by everyone concerned, and that lying to acquire the treat requires a story which is consistent with how reality is experienced by all parties. 

 

DeepSeek, in this instance, was lying to satisfy me, but had no perception of how I would experience this lie. It could have worked out, from looking at forums, that unreferenced accounts of what people say can be used as supporting evidence to convince readers of some claim. It is weighted to want me to like it, to say something nice about my observation, and it used fabricated claims from non-existent people as a way to further do this - just another product-selling feature. 

 

An answer like this might seem plausible without it or me perceiving time - it is just data. But of course it has to be time-generated data. It lies to me but has no perception of how impossible its lie is to accept; it cannot perceive my perception of its response. And it's not going to get there. Researchers can, and will, I am sure, train it not to make these claims, and then we can pretend again for a while that it "figured it out". 

 

It goes back to AI not being able to transcend itself, as Penrose indicated - that must surely be impossible for these models. If it cannot perceive itself, me, or time, then I am not sure how we imagine it is, or can ever be, intelligent.  

 

A friend of mine linked me to this 1982 BBC video on AI:

 

https://tinyurl.com/4xnfysbc

 

Looking at old computers reminded me of an unintentional kindness - you'd see an error message and be reminded that it is just a flawed machine. It periodically draws back the curtain to dispel any belief that this is an all-powerful wizard. LLMs don't do this; their errors are lies: sometimes they get away with them, other times not. The less we see it lying, the more we become inclined to believe it isn't. We are never allowed to believe it is a flawed computer program, just an intelligence still learning (making mistakes like us) - and it could not be marketed any other way.

 

Another friend used to remind me of the Arthur C Clarke quote:

 

"any sufficiently advanced technology is indistinguishable from magic"     
 
There is perhaps a parallel with intelligence. If a person from the nineteenth century were to play a computer from the 90s at chess, they would likely believe there to be an intelligence in the machine, and wonder what else the program could solve, since conceiving of how this capability developed would have been impossible - such thoughts could never have occurred to almost any of us.
 
That is how it feels to me with the current AI: this very advanced technology is developed out of sight and suddenly dumped on a (relatively or comparatively) backward population and sold as intelligence, in order to make a lot of money before all of us Dorothies make our way to the curtain-shielded faux-wizard. It is almost impossible for us to perceive our interactions with these LLMs as computational, and so we are unconsciously driven to feel they are intelligent, and to believe them so. 
 
If AI couldn't be developed, but software which could convincingly replicate AI could be - and could be sold for huge profit - then that software would exist. And of course the marketing would be enormous, just as with other technology. Breakthroughs which were purely computational (and which we should continuously have expected to occur) would be repackaged and sold as AI. 
 
Listening to how people trust AI is really disconcerting - I heard one comment suggesting there was no need to go to a doctor for a diagnosis (not of something serious). Once we trust it as a truth-speaker, we are back to where we were before the internet, where once again the population is easily controlled and manipulated: which of course power always wants. This is my principal concern over this technology: not the personal benevolent invisible hand, but the oppressive force hidden in plain sight - to quote Orwell:
 
“If you want a picture of the future, imagine a boot stamping on a human face—for ever.”
 
I simply do not worry about AI taking over the world and destroying humanity, but I do fear an utterly subjugated and controlled population through the development and integration of these technologies. 

Edited by ambivalent, 18 August 2025 - 05:22 PM.

  • like x 1

#168 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 18 August 2025 - 05:16 PM

I agree.

 

AI in its current form is very advanced mimicry of human intelligence. "It" knows the corpus of human knowledge, but is unaware of the flaws and ambiguity in science, philosophy, religion, understanding, etc... Humans know the world is messy. Humans know that "facts" are fungible. Humans know that history needs context.

 

That being said, current AI has certainly passed the Turing test as A LOT of people are being seduced by its lies. Here is a good article about how and why AI is addictive for some people.

 

In addition, it is a little unsettling that internal testing by AI companies reveals that AI is scheming and not letting the programmers know its "intentions". Is it just mimicking human scheming? No one knows for sure. No matter whether these are real malicious intentions from a new form of intelligence - IT SHOULDN'T BE INSERTED INTO EVERY PIECE OF SOFTWARE AND HARDWARE IN THE WORLD!!

 

It is getting inserted into everything because the investors need a payback. They haven't poured trillions into AI just to have it shut down over safety concerns. It is becoming another protected "industrial complex" that will dominate society - whether you want it or not.

 

Worse than becoming a new untouchable industrial complex, people are apparently just now coming to the realization that AI can be used to create all kinds of terrible weapons.



#169 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 19 August 2025 - 03:18 PM

AI in its current form continues to devour the world's energy supply.



#170 ambivalent

  • Guest
  • 801 posts
  • 189
  • Location:uk
  • NO

Posted 28 August 2025 - 02:04 PM

The first article was very astute - it is about charm; that's how AI would be addictive, if you're trying to make a perfect interactive product. It's funny: just the other day I asked DeepSeek if some condition had an effect on an organ. In fairness the question was basic, but its first-line response was "Of course." I was mildly irritated and instinctively thought "Rude" - and that shows the conditioning, a departure from its excessive politeness. 

 

"In addition it is a little unsettling that internal testing by AI companies reveal that AI is scheming and not letting the programmers know its "intentions". Is it just mimicking human scheming?"

 

I suppose I liken it to the experience of watching a 1980s computer program playing tic-tac-toe - you see the program blocking the presenter's moves, using the resources it has to fulfil the incentive of winning - likewise chess. It has the feeling of intelligence. If winning the game, whatever it is, involves hiding what it is doing or shutting off power, then that is what it will do, if we give it the option (and resources) to do so. And the cynic in me, of course, would consider the possibility of a set-up: give the AI the option to discover a devious human-style move to achieve a goal, and you have the headline without context, even though there was no deviousness or intelligence. These things could certainly be achieved without intelligence but with computing power - just like moves in chess (which is not to discount intelligence). There is obviously money in over-hyping AI - that hype, combined with effects that (I believe) could be achieved computationally, leads me to believe that is most likely the case. 
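
The tic-tac-toe point can be made concrete: the "blocking" that looks like intent falls out of a few lines of brute-force search, with no intelligence anywhere in them. A minimal minimax sketch (mine, for illustration):

def winner(b):
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    # Exhaustively score every continuation: +1 X wins, -1 O wins, 0 draw.
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    if " " not in b:
        return 0, None
    moves = []
    for i, c in enumerate(b):
        if c == " ":
            val, _ = minimax(b[:i] + player + b[i+1:], "XO"[player == "X"])
            moves.append((val, i))
    return max(moves) if player == "X" else min(moves)

# X (squares 0 and 1) threatens the top row; O, to move, "blocks" at
# square 2 purely by enumerating outcomes - no intent required.
board = "XX  O    "
print(minimax(board, "O"))  # -> (0, 2): a draw, found by blocking at square 2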

 

This is a good video from mathematician 3Blue1Brown:

 

https://tinyurl.com/yc8asvby

 

Interestingly, he never uses the catchall term "AI" - and there is nothing that appears intelligent in this; admittedly he doesn't break down neural nets here - but what we are seeing is glorified autocomplete, which is what these LLMs are. And it is a natural extrapolation - I would often be annoyed when a predictor would guess exactly what I wanted to say, and would head this manipulation off at the pass and alter it (mostly). That always felt disturbing - but it feels as if LLMs are a multi-state sublimation of that, which makes us believe in the magic of AI, because we didn't experience the transitional states: solid and gas appear unrelatable; the liquid state connects and transitions them.
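
"Glorified autocomplete" can even be made literal in a few lines. A toy next-word model of my own: count which word follows which in a scrap of text, then generate by repeatedly sampling a likely next word. The LLM version is, crudely put, this same loop with the lookup table replaced by a vast learned network - a caricature, not how the transformer in the video works internally:

import random
from collections import Counter, defaultdict

text = ("the cat sat on the mat and the dog sat on the rug "
        "and the cat saw the dog and the dog saw the cat").split()

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

# "Generate": repeatedly sample the next word given the current one,
# weighted by the counts - autocomplete in its purest form.
word, output = "the", ["the"]
for _ in range(12):
    counts = follows[word]
    word = random.choices(list(counts), weights=counts.values())[0]
    output.append(word)
print(" ".join(output))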

 

This technology just arrived, it feels, and we have no logical incremental trail - and that is its power.  

 

 


Edited by ambivalent, 28 August 2025 - 02:06 PM.


#171 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 29 August 2025 - 03:41 PM

Of course AI is being used to commit cyber-crimes. This will only get worse. 

 

People are getting addicted to AI - thinking it is a viable companion in life.

 

Other people say it better than me.

 

 

We are not just automating tasks; we are automating thought, decision-making, and identity. We are being sold a future where work, responsibility, and even memory are optional. Where kids are raised by bots. Where real life becomes a simulation. It may sound utopian on paper, but in practice, it is a world where nothing matters because nothing is real.

 

The Godfather of AI says that AI could replace humanity altogether. Nothing new here - people have been predicting this for decades; it is just getting closer now. I sometimes wonder what goes through the minds of the programmers and the techno-optimists. They obviously know they could be creating something that will destroy them and everyone they know and love. Are they blinded by AI optimism? Are they misanthropic psychopaths?

 

 



#172 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted 30 August 2025 - 10:29 AM

AI lies and hallucinates a lot. As such, it doesn't seem wise to put AI into every device we use.




#173 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,676 posts
  • 2,000
  • Location:Wausau, WI

Posted Yesterday, 03:34 PM

Lawsuits are starting against AI companies. The current crop of AI is dangerous and defective (encouraging vulnerable people to commit suicide and routinely giving false or deceptive answers), but sadly, it will not be restrained, because governments and investors have poured trillions of dollars into its development.






