AD,
Given the scale and marketing of AI, this seems quite thin gruel - we would expect much progress anyway, and there is a lot of computational power behind this progress. Pluribus, for example, was quite a few years ago, but I am sure it was computationally driven; I don't sense it is AI, even if we dress it up as such. Computers were beating chess before AI, and they would have beaten poker without it in time - we would expect this without AI. We have been throwing a lot of effort at protein folding for a long time through computing power, and it is terrific progress, not to speak of the power of the software solving it, but is it really AI? I would have expected so much more.
I remember speaking casually about the subject a couple of years ago - much of the attitude was "people don't realise what this will do, it is exponential". That has been said a lot. Well, that's not what we have seen, or what I have experienced with ChatGPT or in the world around me over two years, though I see a lot of advertising. The world changes under the covers as it always has, and that will surface.
I agree ChatGPT/DeepSeek are extremely useful tools, and I see them getting better, but not improving beyond what I have come to expect from technology in general. There are new versions coming out that are better than the last ones. It feels like software development, not AI.
Given the resources and sheer scale put into this exponential, I expected so much more within a couple of years, which hasn't happened; progress has matched my experience of technology in general, I suppose.
I do worry, though, that AI will be used as a mechanism of control by those in power - the masses have pushed back in recent years. If we reach the point of no longer being able to trust the web, or the sources we discover, then we will be subdued again.
One of the things I had hoped for is that it would see relationships missed on PubMed, and pick out the undoubted signals in the noise. But my experience has not shown me this. I was looking at a paper with a very interesting result, which the researchers didn't report or pay attention to; the AI didn't either, until I pointed it out - then it was "I missed that". Humanising itself, when it simply wasn't going to spot it because of how it interprets and processes information. It wasn't trained to look for the unusual, the valuable, as it seems we are through evolutionary pressure.
I would expect AI to spot what I spot and more, but it doesn't. It can do a brilliant summary, translate what the paper is saying so I can understand it better - all of which is very useful - but that can be achieved through interpolation between lots of data, a lot of averaging.
When it comes to art, say, I imagine what I think true AI should do, but what it presently doesn't. Could AI, for example, have "discovered Van Gogh" if he had never been born, or Shakespeare, or Mozart? Could it create an artist, and a body of work? I don't see any sense of this - just a lot of averaging between the artistic spaces humans have created.
I feel that AI in its current form is being burdened by its ever-increasing mass, slowed down. As adults we seem like this, not able to imagine as wildly as we could when young - there is too much clutter, we are too defined in our thinking. It doesn't feel like AI has that much flexibility.
I do believe it is a remarkable technology, but also very crap at times, in ways we would never have tolerated before. I don't see progress down this route; I don't see intelligence or consciousness developing in ChatGPT/DeepSeek - but a lot of money being made.
As an aside, here is Hinton making a terrible argument as to why he believes AI is already conscious:
https://youtu.be/vxk...Z2jrp5fe2&t=372
This felt like sophistry. The argument is false, and he surely must know it, yet he puts it forward anyway.
For one thing, he is speaking about substituting out organic brain cells for synthetic cells which do the "same thing" (which I assume do not exist) in a system which is already conscious, a system which we could not presently design nor fully understand. And even were such inductive logic true - that you would still be you, and conscious, were all your organic cells replaced by synthetic ones - does that in any way imply that some system designed out of synthetic cells, which could in theory be replaced within us and retain our consciousness, would therefore be conscious?
It is a ludicrous argument - that all you need is the ingredients and not the recipe - yet he makes it. And if that really is the model in his mind, the one which makes him believe AI is conscious, then I would struggle to take his predictions seriously.
There are so many leaps in his argument, yet he attempts to persuade the host this is some irrefutable "proof by induction" reasoning*.
It is much easier, I'd imagine, to create software that appears intelligent than software that is intelligent, through sheer force of computing, and there would be money in it - so it is perhaps not surprising that's where we have arrived first.
Rightly or wrongly, Penrose cleared my mind on this - I think he is right: it is computational power, not intelligence. I guess we just have to see it play out, but it is, all the same, a dangerous technology.
*Still, I was on board with him here!