  LongeCity
              Advocacy & Research for Unlimited Lifespans



AI soars past the Turing test


#151 ambivalent

  • Guest
  • 796 posts
  • 187
  • Location:uk
  • NO

Posted 03 June 2025 - 03:22 PM

Thanks AD, that's very interesting.

 

I was struggling for a metaphor the other day - I don't have many, but the speed of light came to mind: mass increases with speed, so more and more energy is needed for marginal gains. It has felt as though, with all of this mass of information, subtlety and insight become lost - so more isn't naturally going to be better, just worse. I go back to Chomsky, who around two and a half years ago, a few months before his stroke, said that human intelligence is about making connections from very limited information. We are, unsurprisingly, designed mostly very well for this through harsh evolution. These models seem the opposite (to me) - fairly simple in design (a big matrix) with an overwhelming amount of information to make judgements upon.

 

 

If the model is built on ever-increasing mass, we can imagine the struggle. Humans seem a little like this: we start out as children with wide "variance", making weird and unusual connections adults don't, and this seems to decline under an ever-growing mass of information. It is interesting that a reported trait of several geniuses was that they remained playful. Sometimes we see experts switch fields and make major breakthroughs: they know less, but are able to make unusual connections and solve hard problems.

 

 

At the end, I was a little surprised by this:

 

"In the context of large language models, research found that training LLMs on predecessor-generated text — language models are trained on the synthetic data produced by previous models — causes a consistent decrease in the lexical, syntactic, and semantic diversity of the model outputs through successive iterations, notably remarkable for tasks demanding high levels of creativity"

 

 

This seems like a bad design idea. The models don't know reality, and then the external world stops being a stimulus - the information they receive was already, it seems, analogous to the shadows in Plato's cave. Once a model starts learning from itself - from its own answers - we may get fairground-mirror images of those shadows, eventually lost through iterations of those images. Beliefs in such a model are reinforced by itself - a pseudo-stimulus - rather than by the reality it is trying to understand.
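(A toy illustration of the quoted finding - my own sketch in Python, not anything from the cited research: treat the "model" as a fitted Gaussian and retrain each generation only on samples from the previous generation's fit. Nothing re-anchors it to the original distribution, and on average the fitted spread - a crude stand-in for output diversity - decays over generations.)

import random
import statistics

def collapse_run(generations=30, n=50):
    """One toy run: each generation refits a Gaussian using only
    samples drawn from the previous generation's fitted model."""
    mu, sigma = 0.0, 1.0  # generation 0: the "real" distribution
    for _ in range(generations):
        xs = [random.gauss(mu, sigma) for _ in range(n)]
        mu, sigma = statistics.fmean(xs), statistics.stdev(xs)
    return sigma

# Individual runs wander, but averaged over many runs the surviving
# spread is reliably below the original 1.0 - diversity leaks away.
finals = [collapse_run() for _ in range(200)]
print("mean final spread:", round(statistics.fmean(finals), 3))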

 

I must admit there does seem a blandness to AI at times. It seems creative and yet, at the same time, to be missing something.

 

Again thanks, that was very interesting - it does seem in line with the experience that these models haven't developed as hoped. I didn't run through the maths; that skill is presently flat-packed in the ivory-tower loft! Another of our useful evolutionary adaptations, it seems - it's been years since I rode a bicycle too!

 

 

(I was rushed in the previous post; there were a few errors which completely inverted the meaning of a sentence, but hopefully the general direction and context of the post made this fairly clear!)

 


Edited by ambivalent, 03 June 2025 - 03:27 PM.


#152 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,642 posts
  • 2,000
  • Location:Wausau, WI

Posted 04 June 2025 - 03:49 PM

I don't understand all this hate towards AI. Even if AI turns out to be just a great information grinder and pattern-recognition tool, it will catapult knowledge and advances in biosciences and various health-related areas that would take us measly humans decades and centuries to achieve.

 

There is absolutely no way we get anywhere close to beating aging or achieving LEV without the help of AI to compress decades and centuries of advances into just years.

 

Sure there are risks, but that is so with any new technology. The more powerful the technology, the higher the risks; we as a species must find ways to contain and manage them. Otherwise we might as well go back to the caves and back to using sticks and stones - there's certainly no existential risk there (until an asteroid or something else wipes us out).

 

I don't hate AI, I just don't like the way it is being used so far.

 

We are not getting solutions to aging. AI isn't solving nuclear fusion for energy. AI isn't curing cancer. AI isn't developing new propulsion for rockets, etc. Sure, it is assisting in some ways, but it is not directly solving anything. Ever since the 1960s, AI optimists have been predicting a breakthrough. It hasn't arrived yet.

 

Consider that we have had supercomputers, massive data storage, world-wide collaboration, and automated lab equipment for a few decades now - all leveraged toward curing diseases and aging, yet we have nothing. I am unsure that the current AI will help, except for tiny incremental steps.

 

The reason I am not completely optimistic about AGI in the near future is the current trends. AI is being used to kill people in war, control the population, and create a massive cybercrime industry. Students in school are becoming less smart and less creative because they are relying upon AI to "do their homework". Coders use AI even though its error rate is over 30%.

 

Since the debut of ChatGPT in late 2022, the advancements in AI have been toward ever more realistic digital videos, audio, and media. It is hard to fathom how much time and energy is being spent on making digital videos. Hardly anything is being done about aging.




#153 Advocatus Diaboli

  • Guest
  • 629 posts
  • 641
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 04 June 2025 - 04:45 PM

Here is some of the "Hardly anything is being done about aging" with regard to AI:

From DeepSeek AI:

Recent advances in AI for aging research (often called "Longevity AI") are accelerating breakthroughs in understanding, diagnosing, and even reversing age-related decline. Here are some key developments:

1. AI in Drug Discovery for Aging (Senolytics, Rapamycin Analogs, etc.)
  • Target Identification: AI models (e.g., DeepMind’s AlphaFold, Insilico Medicine’s GAN-based platforms) predict protein structures and identify aging-related drug targets (e.g., senescent cell markers, mTOR pathways).

  • Repurposing Existing Drugs: AI screens FDA-approved drugs for anti-aging potential (e.g., rapamycin analogs, metformin combinations).

  • Example: Insilico Medicine used AI to design a senolytic drug (targeting aging cells) now in clinical trials.

2. AI-Powered Biomarkers of Aging ("Aging Clocks")
  • Epigenetic Clocks: AI (e.g., DeepMAge, Horvath’s Clock) analyzes DNA methylation to predict biological age.

  • Multi-Omics Clocks: AI integrates data from genomics, proteomics, metabolomics to track aging (e.g., Altos Labs’ AI models).

  • Wearable Data: Companies like Deep Longevity (Human Longevity Inc.) use AI to predict aging from smartwatch/sensor data.

3. Early Disease Detection (Alzheimer’s, Cancer, CVD)
  • Retinal Scans: Google’s DeepMind can predict cardiovascular risk & Alzheimer’s from eye images.

  • Voice Analysis: AI detects Parkinson’s & cognitive decline from speech patterns (e.g., Winterlight Labs).

  • Blood Tests: Startups like GRAIL use AI to detect early-stage cancer from liquid biopsies.

4. Personalized Longevity Medicine
  • AI-Driven Interventions: Companies like Lifespan.io and Elysium Health use AI to recommend personalized supplements, diets, and exercise plans based on biomarkers.

  • Gene Therapy Optimization: AI helps design CRISPR-based therapies for age-related gene editing (e.g., Rejuvenate Bio’s work in dogs).

5. Robotics & AI for Elderly Care
  • Social Robots: AI-powered companions (e.g., ElliQ by Intuition Robotics) reduce loneliness.

  • Fall Detection: AI in smart homes (e.g., Samsung’s SARA robot) monitors elderly mobility.

  • Exoskeletons: AI-assisted suits (e.g., SuitX) help seniors maintain mobility.

6. AI in Caloric Restriction & Fasting Mimetics
  • Nutrient Sensing: AI models (e.g., Nutricia’s algorithms) optimize fasting-mimicking diets.

  • Gut Microbiome Analysis: AI (e.g., Seed Health’s platform) suggests probiotics for longevity.

Key Players in AI & Longevity
Company/Institution: Focus Area
  • Insilico Medicine: AI-designed anti-aging drugs
  • Altos Labs (Jeff Bezos-backed): Cellular reprogramming via AI
  • Calico Labs (Google/Alphabet): AI for aging biomarkers
  • Deep Longevity: Epigenetic aging clocks
  • Life Biosciences: Mitochondrial repair via AI
Future Outlook
  • Clinical Trials: AI is speeding up trials for senolytics, NAD+ boosters, and telomere therapies.

  • Digital Twins: AI creates virtual patient models to simulate aging interventions.

  • Ethical AI: Debates on AI bias in aging research (e.g., underrepresentation of elderly in datasets).



#154 forever freedom

  • Guest
  • 2,366 posts
  • 67

Posted 04 June 2025 - 05:04 PM

I don't hate AI, I just don't like the way it is being used so far.

 

We are not getting solutions to aging. AI isn't solving nuclear fusion for energy. AI isn't curing cancer. AI isn't developing new propulsion for rockets, etc. Sure, it is assisting in some ways, but it is not directly solving anything. Ever since the 1960s, AI optimists have been predicting a breakthrough. It hasn't arrived yet.

 

Consider that we have had supercomputers, massive data storage, world-wide collaboration, and automated lab equipment for a few decades now - all leveraged toward curing diseases and aging, yet we have nothing. I am unsure that the current AI will help, except for tiny incremental steps.

 

The reason I am not completely optimistic about AGI in the near future is the current trends. AI is being used to kill people in war, control the population, and create a massive cybercrime industry. Students in school are becoming less smart and less creative because they are relying upon AI to "do their homework". Coders use AI even though its error rate is over 30%.

 

Since the debut of ChatGPT in late 2022, the advancements in AI have been toward ever more realistic digital videos, audio, and media. It is hard to fathom how much time and energy is being spent on making digital videos. Hardly anything is being done about aging.

 

I agree that AI is still very far from fulfilling what we expect from it, because expectations are so high. But AI has already given us AlphaFold, it was essential for us to quickly develop a COVID vaccine, and it's increasingly being used to find novel molecules and drugs that then need time for trials, among many other examples.

 

Compared to what we expect from it, it still falls short; the expectations are sky-high. Here the good ol' lake analogy seems appropriate: start with a drop in the lake, double it, and after X doublings the lake is full. But at just 5 doublings before X we can barely see water in the lake, with only about 3% of it filled. I believe we are a few steps/"doublings" before the true takeoff, where the magic happens. You talk about us already having had decades of supercomputers and other computing technologies - we have been slowly filling the lake, doubling the water in it, for decades. We are reaching the tipping point.
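(For concreteness, the arithmetic behind the analogy - a throwaway sketch; the number of doublings shown is arbitrary:)

# k doublings before the final one, the lake holds only 1/2**k of its
# final volume - exponential growth looks like almost nothing until the end.
for k in range(6, -1, -1):
    print(f"{k} doublings before full: {1 / 2**k:7.2%} of the lake")
# ...prints 3.13% at k=5, matching the "only about 3%" above.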

 

I already can't imagine myself living without AI; I already find it incredibly useful in my daily life. It's certainly already the most incredible tool to come about in a very very long time.

 

AGI is just around the corner, and soon thereafter we shall get the big scientific advances you ask about, and that I agree with you we have not yet achieved. 



#155 ambivalent

  • Guest
  • 796 posts
  • 187
  • Location:uk
  • NO

Posted 20 June 2025 - 03:32 PM

AD's recent posts made me think that LongeCity guidelines are needed for using ChatGPT/DeepSeek as supporting evidence; these LLMs are too unreliable to be treated as end-references. They are tools for further research - I don't mind the pastes, but users must be encouraged to provide references for such claims. There is a lot of ambiguity and there are false statements, and it isn't difficult to get these models to change their minds (as with poker and Pluribus). But I sense real danger in dialogue if AI outputs are used to shut down arguments - they can build the foundation of an argument, but they must not be it, nor close it - and that may be the effect, intended or not.

 

Forever Freedom, the endless doubling of a drop of water is a seductive analogy, and if AI progress were to mirror it (like, say, Moore's law) then we could reason this way. But I don't think we have a basis to assume this. A software product doesn't get better and better just because the hardware relentlessly improves; it is bounded by its code - it can't solve new problems just because chip capacity doubles every two years.

 

For some software, chip capacity is a limiting factor when solutions are needed in a timely way. But some software will never solve certain problems; that is the limitation of the model. That is how we should see AI, I believe. What it can solve well is likely to come down to the characteristics of the problem.

 

Chess is a real-world problem that maps to computers especially well. Poker, for example, is different. A human chess player can never beat a computer, and, to the best of my understanding, a computer will do better against a human than, say, Fischer would.

 

The best poker player in the world could never beat the best poker algorithm, but the best human player could drastically outperform a computer against other humans. When a human plays a computer head to head, the human plays on the computer's terms, so to speak. In chess there are no real-world factors that change things. It is possible, I suppose, that a human could expect to beat a human chess opponent quicker than a computer/AI could, because the human may know better than the computer the mistakes that opponent will make. If you as a poker player have a read on a player, then you can make a better decision than any AI, because you can reduce the solution space. With that reduced solution space the algorithms would once again do far better than humans - but the algorithm doesn't have the read, and has no way of getting it. AI under its current design would never be able to solve the real-world problem of playing poker optimally, no matter how much time we give it - but it can (and did, some time ago) do so with chess.

 

As Mind observed, we see amazing AI-generated videos. This is a problem AI can solve quickly and easily, and we see those improvements scale up over a short period of time. I don't find that surprising, because it is just interpolating between known truths (or sample data) and accepting or rejecting on that basis extremely efficiently. So it doesn't seem too difficult to imagine that fake videos of celebs can be convincingly made. Making fake videos seems to me rather like an extension of the now-dropped idea of mining big data for exploitable patterns. Being a bit wrong with fake videos is OK, but being a bit wrong with a complex system isn't (as in chaos theory).

 

What AI cannot do is still kind of obvious when we are asked, for example, to identify the pictures that contain stairs or motorcycles in order to proceed on a search engine: we do this easily - AI, evidently, doesn't. We don't need huge amounts of information to perform these tasks - as Chomsky said, humans make intelligent judgements on limited information.

 

AI is software, and as with all software, the usefulness of its solutions depends on how well that software has the capacity to map the system it is modelling.

 

AI doesn't know reality, and if a critical chunk of that reality is missing, then I am not sure how we can expect it to solve such problems.

 

To put it one way: suppose the information AI has on biology - our measurements of a century or more - could fit an infinite number of theoretical biological systems (our bodies exist within a universe and are subject to its rules, which we do not fully understand - and neither does AI). This is perfectly possible: mathematical infinity can exist under constraints, indeed under infinitely many constraints, and every time we add a piece of information we are just adding a constraint (which may itself not be true, as we know from research). So we can't expect all the answers. That doesn't mean there are no statements of truth within these models that relate to reality and that can be revealed with the help of computing/AI. It is a tool that will be appropriate or inappropriate to a given problem - not human intelligence, only better - and nothing I see makes me think it is heading that way. It has been two years; exponential growth should have seen it overtake us by now, if it is aligned to intelligence like ours - but it clearly isn't. It does amazing things, as computers always have.
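(This underdetermination point has a simple mathematical analogue - a toy example of my own, not anything from the thread or the research: two different "theories" can agree on every measurement we happen to possess and still disagree everywhere else.)

import math

# Two "theories" that coincide at every data point we have (the integers)
# yet diverge between them; any coefficient on the sine term gives
# another such theory, so the data admit infinitely many models.
theory_a = lambda x: float(x)
theory_b = lambda x: x + math.sin(math.pi * x)  # extra term vanishes at integers

for x in range(4):
    print(x, theory_a(x), round(theory_b(x), 9))  # indistinguishable on the data
print(0.5, theory_a(0.5), theory_b(0.5))          # 0.5 vs 1.5 away from the data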

 

So I suppose, in my imaginings, I believe AI can rapidly exhaust certain solvable, finite problems, but be left hopelessly marooned on more complex ones. We know from our experience that we can have too much information - our insights don't come from information overload. Our intelligence grows with information, not solely because of it; it is what our brain structure does with that information. But we are building AI more on information than intelligence, it seems to me. Anyhow, much of this is repetition.

 

 

My real concern with "AI" is the brainwashing aspect of it all; it feels indoctrinating. MSM is losing its foothold in containing the masses - we see it now in the deepening political crisis; people have other resources. I have felt that ChatGPT will be a device to subjugate people once again. MSM has always been about creating an average or normalised belief among the population - that has gone. Selling AI as intelligent is attractive to free-thinkers: we can make sense of the world with its help, if we ask the right questions. If society believes it is true and honest - just as it did with MSM - then once again society can be controlled.

 

This MIT study reveals enough for us to be very concerned:

 

https://www.media.mi...ain-on-chatgpt/

 

Discussed further here:

 

 

 

The discussion about AI's intelligence is, I feel, a sideshow - more of a distraction - compared to the effect it has on our capacity to think, and its use as a device of control.

 

OpenAI have just been given a $200 million defence contract. Same old.  

 

 

 


Edited by ambivalent, 20 June 2025 - 03:46 PM.

  • Good Point x 2

#156 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,642 posts
  • 2,000
  • Location:Wausau, WI

Posted 20 June 2025 - 03:54 PM

 

The discussion about AI's intelligence is, I feel, a sideshow - more of a distraction - compared to the effect it has on our capacity to think, and its use as a device of control.

 

OpenAI have just been given a $200 million defence contract. Same old.  

 

Agreed. All I see AI being used for right now - the profit-making use cases - is to control people and manipulate them so as to squeeze every last dollar out of their bank accounts.

 

It occurred to me that the Matrix "humans-used-as-batteries" scenario is not that far off the mark. AI is making people stupid and lazy. If this trend continues, then the only purpose for people will be to "feed" the machine. Humans will become "commercial batteries" whose only purpose is to keep buying useless material items and consuming ever more entertainment so that the big tech owners of the AI can keep making money.

 

Sadly, AI is not currently solving any big problems we face (energy, longevity, space travel).


Edited by Mind, 20 June 2025 - 03:55 PM.


#157 Advocatus Diaboli

  • Guest
  • 629 posts
  • 641
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 20 June 2025 - 09:17 PM

Ambivalent, I, along with you, would prefer to see references for claims. When I make claims of fact I generally supply references. Exceptions would include not providing references for claims or facts that I consider to be well known. For example, I wouldn't cite Whitehead and Russell’s  "Principia Mathematica" in order to substantiate a claim that 1+1=2. Likewise, I generally won't provide references for assertions made by others, an AI for example, that I may quote.

 

I, for one, would like to see references substantiating "The best poker player in the world could never beat the best poker algorithm, but the best human player could drastically outperform a computer against other humans", as it appears to me to be what I would characterize as a claim of fact rather than a statement of opinion.

 

Had you included something such as "I think" as a preface, I wouldn't have a problem. Well, notwithstanding the seemingly contradictory nature of the statement--i.e. how can the "best player in the world" "drastically outperform" something (the poker algorithm) that can't be beaten by humans? Unless "outperform", as you use it, means something other than winning.

 

 

 

 



#158 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,642 posts
  • 2,000
  • Location:Wausau, WI

Posted 23 June 2025 - 04:59 PM

One reason that AI cannot "solve" aging right now - or even come close to helping - is that a significant majority of health and aging research is junk.



#159 Advocatus Diaboli

  • Guest
  • 629 posts
  • 641
  • Location:Chronosynclastic Infundibulum ( floor Z/p^nZ )
  • NO

Posted 23 June 2025 - 06:22 PM

Mind writes in post #158 

 

"or even come close to helping" referring to AI use to " 'solve' aging".

 

What is it that you mean by "even come close", Mind? Seems like a pretty bold statement.

 

Perhaps you could identify some benchmarks that you think would indicate that AI contributions are getting close to solving the problem?


Edited by Advocatus Diaboli, 23 June 2025 - 06:42 PM.


#160 ambivalent

  • Guest
  • 796 posts
  • 187
  • Location:uk
  • NO

Posted 24 June 2025 - 04:12 PM

Ambivalent, I, along with you, would prefer to see references for claims. When I make claims of fact I generally supply references. Exceptions would include not providing references for claims or facts that I consider to be well known. For example, I wouldn't cite Whitehead and Russell’s  "Principia Mathematica" in order to substantiate a claim that 1+1=2. Likewise, I generally won't provide references for assertions made by others, an AI for example, that I may quote.

 

I, for one, would like to see references substantiating "The best poker player in the world could never beat the best poker algorithm, but the best human player could drastically outperform a computer against other humans", as it appears to me to be what I would characterize as a claim of fact rather than a statement of opinion.

 

Had you included something such as "I think" as a preface, I wouldn't have a problem. Well, notwithstanding the seemingly contradictory nature of the statement--i.e. how can the "best player in the world" "drastically outperform" something (the poker algorithm) that can't be beaten by humans? Unless "outperform", as you use it, means something other than winning.

 

 

 

AD,

 

My post wasn't meant to be a personal jibe - I apologise if it was received as such.

 

I am certainly not the "cite reference" police; after 12 years here I have twice appended "Needs References" to a post. I will have lazily omitted references at times, but am generally fairly good on the important things - that isn't the point, and any errors my post may have contained (and I will come to that later) are not hypocrisy, because, and I think this was quite clear, I am arguing for the establishment of reference standards for AI-produced content - which is a significant and emerging source of material.

 

The standards which you sought to address within my post would fall under general forum guidelines and expectations - most of us meet, exceed, and fail in those obligations from time to time. It would not be hard to scroll through my posts and find instances of such failures, though in this instance I do not believe a reference was necessary (nor the statement contradictory, as I hope to demonstrate in the second part of this post).

 

As mentioned, I am not a pedant on such things. Like most, I can often discern when a person is stating something as fact when it is clearly an opinion - and often both parties are aware it is an opinion, as is the way of language.

 

References are, as I am sure you agree, important; they are a courtesy to the reader, obligating the author to navigate by facts. Adherence to these norms is well established on this site, and I am not trying, or interested in trying, to make them stricter. Posts can also have too many references, and I will not always include one if I feel a validating reference is easily found - which is quite normal.

 

The central - indeed only - issue of the post was to establish guidelines for using AI-generated content.

 

It was your response to Mind's comment via DeepSeek that I felt a measure of discomfort about, and needed to reflect upon.

 

Here’s the problem: 

 

Suppose you, as opposed to DeepSeek, had written that post a couple of years ago. It would likely have taken many hours to put together, rather than a matter of seconds for the AI algorithm. Had the post been written by you, you would have expected of yourself, and been expected, to provide references.

 

When posting this way there is the implicit assumption that explicit references are not needed - that the AI is in and of itself a reliable reference. This, as we all know from experience, is not true. In fact, you directly contributed to this observation when providing the reference on LLM model collapse.

 

As we have been discussing, there is a real danger that AI becomes the accepted authority, and your post implicitly assumed this to be true - otherwise you would have provided references for a post that would have needed several, had it been written by a forum contributor.

 

In the scenario described, the onus is on Mind to disprove the AI's statements, which could take all day, when you yourself invested only 10 seconds. Unless the person is willing to put in that investment, the argument is shut down. And that is a problem.

 

The AI becomes the authority because it can easily generate content that takes considerable time and effort to verify or disprove. It becomes easier to accept, because it is “probably or mostly correct”.

 

Finding references is the least that should be done; if they can't be found, the content should not be included - or at the least it should be indicated that references were searched for but not found.

 

AI is known to fabricate, and to change its mind under light interrogation. In this discussion in particular, there is ambiguity as to what is actually produced within the field of AI rather than by AI itself - as was the case with Pluribus - and, indeed, as to whether what is defined as AI is in fact artificial intelligence, and not dressed-up autofill.

 

So once again, it is a courtesy to the reader to provide this additional research; it is not for the reader to do the disproving when no effort has been made by the poster to prove the AI's statements.

 

I consider your post to have been a very useful contribution - that wasn't the point of my post. I think this software can put us in the right place to mine, providing leads that allow us to research further.

 

That is why we need forum guidelines: we cannot accept AI-generated content as an authority. We cannot just paste its content and expect others to sort it out; we must contribute significantly to that end ourselves.

 

If not, we will fall into a trap. I, like many, remember the early days of search engines, when you would get an honest answer to your request - or the best effort. Now, of course, search engines weight rankings by profit - but we were trained in those early days to believe the searches were authentic, and were in the end manipulated for profit.

 

This, of course, is how AI could turn out: we become trained to trust it, and too lazy to challenge it. That is why I believe we need certain rules around AI content - that was the purpose of the post - not to initiate a forum crackdown on unreferenced assertions.

 

In general, the importance of a statement - how central it is - guides the need for a reference. As stated, I don't believe my poker content needed a reference, and it wasn't important enough content regardless (I would say) - but that is a subjective position. As said, I don't have a history of down-rating people for this, and if I feel a post needs a reference I will ask, and do on occasion.

 

In the post I put out, what is missing is not a reference but perhaps a good enough explanation, since the claim is reason-based.

A good guide for AI-generated content, I would suggest, is to provide the references you would have been expected to provide had you written the post yourself.

 

I am certainly not keen on shutting down discussions with content provided by AI tools - content that takes a few seconds to produce and potentially hours to disprove or verify. That would be a bad precedent, and would kill debate. The poster should seek to verify the AI content before posting, as a courtesy to the reader: it is far too unreliable, and will at times represent misinformation - something we should avoid.

 

The asymmetry of effort could lead us to be force-fed AI content - and encourage us to accept, and not to challenge.

 

AI content is extremely useful, but unreliable - we shouldn't cut and paste it unless the thread, in its construction, permits it.

 

 

 

Now, on to the poker content - which is becoming somewhat OT.

 

 

There isn't anything contradictory in the statements I presented, and references were not required, I believe, because the statement is driven by reason on top of some basic, but perhaps poorly expressed, assumptions.

 

To speak of the best poker player in the world is somewhat lazy, I admit, because it requires a constraint. If we ask a tennis enthusiast who the best tennis player in the world is, we might be asked, "On what surface?" And that rather applies to poker - there are different poker environments, and the importance of differing skills varies depending on the game. Some games can be very technically based, others more improvised - playing and reading the player.

 

But the definition doesn’t matter too much here, as it doesn’t undermine the argument. 

 

By outperform, I mean, as you suggest, to win more.

 

So I am stating that even though a poker algorithm may best every player in the world, the best player in the world could dramatically outperform the algorithm against some humans. This is no contradiction. For this discussion I mean the statement in one way, but it could easily be shown to be true in another.

 

The other way relates to game theory. A GTO (game-theoretically optimal) strategy cannot be beaten: no player in the world could defeat it over time (nor perfectly replicate it, naturally, due to its complexity - it isn't tic-tac-toe!). By definition, a GTO strategy is optimal only against itself; put up against any other strategy, it is suboptimal. As such, some other strategy could always win more against a non-GTO opponent than GTO would - but such strategies are themselves always exploitable; the one exception is GTO. GTO might be viewed as a defensive strategy: nothing can beat it, but it can always be improved upon against sub-optimal opponents. Deviate from GTO, though, and you yourself become exploitable - which is why it is a defensive strategy.

 

As I recall, Pluribus wasn't adaptive - players are. Pluribus was playing against top players, which, a little ironically, could show it in its comparative best light. If we put two very bad players amongst the pros along with Pluribus, there is a good chance the pros would outperform Pluribus, because GTO would not be close to optimal against those bad players - pro players would often be much closer to the optimal strategy against bad players than GTO is.

 

However, the point I was trying to make in relation to the subject under discussion was that a top poker player could significantly outperform an algorithm against some given player - i.e. be expected to win more - because the player is of the physical world, and there are a vast number of variables in play that are not accessible to the algorithm, nor present when said player plays the computer.

 

So a skillful human will have the opportunity to reduce the solution space in cases where the algorithm cannot and does not.

 

Once you bring in tells, you change the problem space. The human, for example, may know his opponent almost never has Aces in this scenario - because, say, the opponent didn't look at his watch, which he always does when raising with Aces - and so the human is optimising a different problem to the computer, which cannot discount the player holding Aces, because the bet size is consistent with holding them. So the human can of course outperform the computer, if the information he gathers is significant enough.

 

When a computer and a top poker player compete against some given poker player, they may have the same objective, but they are trying to solve different problems; or, perhaps better, they are trying to solve the same problem while acting on different information. The poker pro has more information - and if it is better information, the pro can be expected to do better.

When the poker pro plays the computer algorithm, he does so on the algorithm's terms; his real-world skills are redundant.

 

With chess, there are next to no real-world factors which can benefit the player over the computer; in poker that isn't the case.

 

The only assumption I am making here is that tells exist and are meaningful, and that they are accessible to humans but not to computer players (which requires no references) - the rest is deduction. Against the same player, the human and the computer are solving different problems - so the human poker player may perform better than the computer against said player, even though he would be defeated by the computer. It's not quite paper-scissors-stone, but that could serve as an analogy, with different attributes coming into play against different opponents: there isn't an ordered hierarchy.
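(For anyone who wants the paper-scissors-stone version made concrete, here is a toy expected-value calculation - my own illustrative sketch with made-up strategy numbers, not anything from the Pluribus work. The equilibrium mix never loses on average, but a "read" on a biased opponent wins more - at the price of itself becoming exploitable.)

PAYOFF = [[0, -1, 1],   # rows: our move (R, P, S)
          [1, 0, -1],   # cols: opponent's move (R, P, S)
          [-1, 1, 0]]   # +1 win, 0 tie, -1 loss for us

def ev(ours, theirs):
    """Expected payoff of our mixed strategy against theirs."""
    return sum(ours[i] * theirs[j] * PAYOFF[i][j]
               for i in range(3) for j in range(3))

gto       = [1/3, 1/3, 1/3]    # unexploitable equilibrium ("GTO")
rock_fan  = [0.5, 0.25, 0.25]  # a "bad player" who over-plays rock
all_paper = [0.0, 1.0, 0.0]    # the maximal exploit of the rock fan

print(ev(gto, rock_fan))        # 0.0  : GTO never profits from the leak
print(ev(all_paper, rock_fan))  # 0.25 : the "read" wins more vs this player
print(ev(all_paper, [0, 0, 1])) # -1.0 : but deviating is itself exploitable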


Edited by ambivalent, 24 June 2025 - 05:05 PM.


#161 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,642 posts
  • 2,000
  • Location:Wausau, WI

Posted 24 June 2025 - 04:15 PM

Mind writes in post #158 

 

"or even come close to helping" referring to AI use to " 'solve' aging".

 

What is it that you mean by "even come close", Mind? Seems like a pretty bold statement.

 

Perhaps you could identify some benchmarks that you think would indicate that AI contributions are getting close to solving the problem?

 

I will have to think about benchmarks.

 

Admittedly, my perspective comes from my involvement in life-extension advocacy for over two decades. To this day, there is nothing proven to extend human lifespan, in spite of all the research, computing power, and money that has been thrown at the problem. We are not making any progress, perhaps because most health and medical research is "junk" science, to put it harshly.



#162 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,642 posts
  • 2,000
  • Location:Wausau, WI

Posted 29 June 2025 - 09:48 AM

Here is an article which summarizes everything I have mentioned over the last year regarding the trends in AI (I did not write the article, though the thoughts are very similar).

 

 

 

Let's be clear: the same people who promised you connection, promised you truth, are now building the walls of your digital prison. The tech titans who once championed open standards and user sovereignty have become data barons, carefully controlling the narrative and the neural nets. Platforms that once amplified your voice now filter, flag, or erase it. Algorithms that once connected ideas now corral them into monetizable silos.

 




#163 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,642 posts
  • 2,000
  • Location:Wausau, WI

Posted 01 July 2025 - 04:15 PM

Just another video about how AI is being used to enslave people - not to solve the world's biggest problems. Notice how the corporate heads/researchers are not even hiding it anymore. They call it "choice engineering", because calling it manipulation, enslavement, or propaganda doesn't sound very good to the marketing team.


  • Informative x 1




