Ambivalent, I, along with you, would prefer to see references for claims. When I make claims of fact I generally supply references. Exceptions would include not providing references for claims or facts that I consider to be well known. For example, I wouldn't cite Whitehead and Russell’s "Principia Mathematica" in order to substantiate a claim that 1+1=2. Likewise, I generally won't provide references for assertions made by others, an AI for example, that I may quote.
I, for one, would like to see references substantiating "The best poker player in the world could never beat the best poker algorithm but the best human player could drastically outperform a computer against other humans," as it appears to me to be a claim of fact rather than a statement of opinion.
Had you included something such as "I think" as a preface, I wouldn't have a problem. Well, notwithstanding the seemingly contradictory nature of the statement--i.e. how can the "best player in the world" "drastically outperform" something (the poker algorithm) that can't be beaten by humans? Unless "outperform", as you use it, means something other than winning.
AD,
My post wasn't meant to be a personal jibe - I apologise if it was received as such.
I am certainly not the "cite reference" police; after 12 years here I have twice appended "Needs References" to a post. I will have lazily omitted references at times, but am generally fairly good on the important things. That isn't the point, and any errors my post may have contained (and I will come to that later) are not hypocrisy, because, and I think this was quite clear, I am arguing for the establishment of reference standards within AI-produced content - which is a significant and emerging source of material.
The standards which you sought to address within my post would fall under general forum guidelines and expectations - most of us meet, exceed and fall short of those obligations from time to time. It would not be hard to scroll through my posts and find instances of such failures, though in this instance I do not believe a reference was necessary (nor the statement contradictory, as I hope to demonstrate in the second part of this post).
As mentioned, I am not a pedant on such things. I, like most, can often discern when a person is stating something as fact when it is clearly an opinion - and often both parties are aware it is an opinion; that is how language works.
References are, as I am sure you agree, important: they are a courtesy to the reader, obliging the author to navigate by facts. Adherence to these norms is well established on this site, and I am not interested in making them stricter. Posts can have too many references, and I will not always include them if I feel a validating reference is easily found - which is quite normal.
The central issue of the post was to establish guidelines for using AI-generated content.
It was your response to Mind's comment via DeepSeek that I felt a measure of discomfort about and needed to reflect upon.
Here’s the problem:
Suppose you, as opposed to DeepSeek, had written that post a couple of years ago. It would likely have taken many hours to put together, rather than the matter of seconds it takes the AI algorithm. Had the post been written by you, you would have expected of yourself, and been expected, to provide references.
When posting this way there is the implicit assumption that explicit references are not needed - that the AI is in and of itself a reliable reference. This, as we all know from experience, is not true. In fact, you directly contributed to this observation when providing the reference on LLM model collapse.
As we have been discussing, there is a real danger that AI becomes the accepted authority, and your post implicitly assumed this to be true - otherwise you would have provided references for a post that would have needed several, had it been written by a forum contributor.
In the scenario described, the onus is on Mind to disprove the AI's statements, which could take all day, when you yourself invested only ten seconds. Unless the person is willing to put in that investment, the argument is shut down. And that is a problem.
The AI becomes the authority because it can easily generate content that takes considerable time and effort to verify or disprove. It becomes easier to accept, because it is “probably or mostly correct”.
Finding references is the least that should be done; if they can't be found, the content should not be included - or at the least it should be indicated that references were searched for but not found.
AI is known to fabricate, and to change its mind under light interrogation. In this discussion in particular there is ambiguity as to what is actually produced within the field of AI, rather than by AI itself - as was the case with Pluribus. And indeed whether what is defined as AI is in fact Artificial Intelligence - and not dressed-up autofill.
So once again it is a courtesy to the reader to provide this additional research, not for the reader to do the disproving, when no effort has been made by the poster to prove the AI’s statements.
I consider your post to have been a very useful contribution - that wasn't the point of my post. I think this software can put us in the right place to mine: providing leads that allow us to research further.
That is why we need forum guidelines - we cannot accept AI-generated content as an authority; we cannot just paste its content and expect others to sort it out, we must contribute significantly to that end ourselves.
If not we will fall into a trap. I, like many, will remember the early days of search engines, where you would get an honest answer to your request - or the best effort. Now, of course, search engines weight rankings by profit - but we were trained in the early days to believe the searches were authentic, and in the end we were manipulated for profit.
This of course is how AI could turn out, we become trained to trust it, and become too lazy to challenge it - that is why I believe we need certain rules around AI content - that was the purpose of the post - not to initiate a forum crackdown on unreferenced assertions.
In general, the importance of the statement - how central it is - guides the need for a reference. As stated, I don't believe my poker content needed a reference; it wasn't important enough, regardless (I would say) - but that is a subjective position. As said, I don't have a history of down-rating people for this, and if I feel a post needs a reference I will ask, and do on occasion.
In the post I put out, what is missing is not a reference, but perhaps a good enough explanation, since it is reason-based.
A good guide for AI-generated content, I would suggest, is to provide the references you would have been expected to provide had you written the post yourself.
I am certainly not keen on discussions being shut down by content provided by AI tools - content that takes a few seconds to produce and potentially hours to disprove or verify; that would be a bad precedent, and kill debate. The poster should seek to verify the AI content before posting, as a courtesy to the reader: it is far too unreliable, and will at times represent misinformation - something we should avoid.
The asymmetry of effort could lead us to be force-fed AI content - and encourage us to accept, and not to challenge.
AI content is extremely useful, but unreliable - we shouldn't cut and paste it unless the thread, in its construction, permits it.
Now, on to the poker content - which is becoming somewhat OT.
There isn't anything contradictory in the statements I presented, and references were not required, I believe, because the statement is driven by reason on top of some basic, but perhaps poorly expressed, assumptions.
To speak of the best poker player in the world is somewhat lazy, I admit, because it requires a constraint. If we asked a tennis enthusiast who the best tennis player in the world is, we might be asked "On what surface?". That rather applies to poker - there are different poker environments, and the importance of differing skills can vary depending on the game. Some games can be very technically based, others more improvised - playing and reading the player.
But the definition doesn’t matter too much here, as it doesn’t undermine the argument.
By outperform, I mean, as you suggest, to win more.
So I am stating that even though a poker algorithm may best every player in the world, the best player in the world could dramatically outperform the algorithm against some humans. This is no contradiction. For this discussion I mean the statement in one way, but it could easily be shown to be true in another.
The other way relates to game theory. A GTO strategy - a game-theoretically optimal strategy - cannot be beaten: no player in the world could defeat it over time (nor perfectly replicate it, naturally, due to its complexity - it isn't tic-tac-toe!). But a GTO strategy is only optimal against itself; against any other strategy it is suboptimal. As such, other strategies can always win more against a non-GTO opponent than GTO does - but those strategies are themselves always exploitable; only GTO is not. GTO might be viewed as a defensive strategy - nothing can beat it, but it can always be improved upon against suboptimal strategies. Deviate from GTO, however, and you yourself become exploitable - which is why it is a defensive strategy.
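To make that concrete, here is a toy sketch in Python using paper-scissors-stone rather than poker - all the strategies and probabilities are my own illustrative inventions, not anything from a solver. The equilibrium (uniform) strategy cannot lose in expectation against anything, but it also gains nothing; an exploitative strategy wins more against a flawed opponent, at the cost of being fully exploitable itself.

```python
from fractions import Fraction  # exact arithmetic, so the expected values are clean

# Which move beats which in paper-scissors-stone.
BEATS = {"stone": "scissors", "paper": "stone", "scissors": "paper"}

def payoff(a, b):
    """Row player's payoff: +1 win, -1 loss, 0 tie."""
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def expected_value(strat_a, strat_b):
    """Expected payoff of mixed strategy A against mixed strategy B."""
    return sum(pa * pb * payoff(ma, mb)
               for ma, pa in strat_a.items()
               for mb, pb in strat_b.items())

third = Fraction(1, 3)
gto = {"stone": third, "paper": third, "scissors": third}      # the equilibrium
biased = {"stone": Fraction(3, 5), "paper": Fraction(1, 5),
          "scissors": Fraction(1, 5)}                          # a flawed opponent
exploit = {"paper": Fraction(1)}                               # always counter the bias

print(expected_value(gto, biased))      # 0   - GTO cannot lose, but gains nothing
print(expected_value(exploit, biased))  # 2/5 - the exploiter wins more
print(expected_value({"scissors": Fraction(1)}, exploit))  # 1 - but pure paper is itself fully exploitable
```

The same structure holds in poker, only with a solution space far too large to enumerate like this.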
As I recall, Pluribus wasn't adaptive - players are. Pluribus was playing against top players, which, a little ironically, shows it in its comparatively best light. If two very bad players were put amongst the pros alongside Pluribus, there is a good chance the pros would outperform Pluribus, because GTO would not be close to optimal against those bad players. Against bad players, pro players would often be much closer to the optimal strategy than GTO is.
However, the point I was trying to make in relation to the subject discussion was that a top poker player could significantly outperform an algorithm against some given player - i.e. be expected to win more - because the player is of the physical world, and there are a vast number of variables in play that are not accessible to the algorithm, nor present when said player plays the computer.
So a skillful human will have the opportunity to reduce the solution space in cases where the algorithm cannot.
Once you bring in tells, you change the problem space. The human, for example, may know his opponent almost never has Aces in this scenario because, say, he didn't look at his watch - which he always does when raising with them. The human is then trying to optimise a different problem to the computer, which cannot discount the player holding Aces, because the bet size is consistent with holding Aces. So the human can, of course, outperform the computer, if the information he gathers is significant enough.
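That reasoning can be sketched as a simple Bayes update - every probability below is invented purely for illustration. Both players revise P(Aces) from the bet size, but only the human can also condition on the watch tell, and the extra evidence collapses the estimate:

```python
def posterior(prior, likelihood_if_aces, likelihood_if_not):
    """Bayes' rule for the binary hypothesis 'opponent holds Aces'."""
    num = prior * likelihood_if_aces
    return num / (num + (1 - prior) * likelihood_if_not)

prior_aces = 0.05   # hypothetical base rate of Aces in this spot

# The computer's only evidence: a bet size consistent with Aces.
p_computer = posterior(prior_aces, 0.9, 0.3)

# The human sees the same bet, plus the tell: no watch glance,
# which this opponent almost always gives when raising with Aces.
p_human = posterior(p_computer, 0.05, 0.95)

print(round(p_computer, 3))  # 0.136 - the computer must still respect Aces
print(round(p_human, 3))     # 0.008 - the human can all but discount them
```

Same objective, different information - which is the whole point.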
When a computer and a top poker player compete against some given poker player - they may have the same objective, but they are trying to solve different problems; or perhaps better, they are trying to solve the same problem, but acting on different information. The poker pro has more information - and if it is better, the player could be expected to do better.
When the poker pro plays the computer algorithm, he is doing so on the algorithm's terms; his real-world skills are redundant.
With chess, there are next to no real world factors which can benefit the player over the computer: in poker that isn’t the case.
The only assumption I am making here is that tells exist and are meaningful, and that they are inaccessible to computer players but accessible to humans (which requires no references) - the rest is deduction. Against the same player, the human and the computer are solving different problems - so the human poker player may perform better than the computer against said player, even though he would be defeated by the computer. It's not quite paper-scissors-stone, but that could serve as an analogy - where different attributes come into play against different opponents: there isn't an ordered hierarchy.
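Put trivially in code - with win rates invented purely for illustration - the claim is just that "performs better against" need not form an ordered hierarchy:

```python
# Hypothetical win rates (big blinds per 100 hands) for the row player
# against the column player; the numbers are made up for illustration only.
winrate = {
    ("algorithm", "pro"):     2,    # the algorithm beats the best human...
    ("algorithm", "villain"): 5,    # ...and grinds down the weak player at GTO pace,
    ("pro", "villain"):       12,   # but the pro, reading tells, extracts far more.
}

# Losing head-to-head does not imply performing worse against a third opponent.
assert winrate[("algorithm", "pro")] > 0
assert winrate[("pro", "villain")] > winrate[("algorithm", "villain")]
```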
Edited by ambivalent, 24 June 2025 - 05:05 PM.