So, how is the singularity-AI-technology issue supposed to work? What can a superhuman AI in fact contribute?
The singularity concept (my understanding)
As I understand it, the typical singularity story (which a lot of people seem mentally fixated on, often in the hope of extending their lives) goes like this:
It's all about the AI. Computers will get faster and faster, eventually far outmatching the raw processing power of the human brain. By the way, the raw processing power of our brains is not much greater than that of dolphin or elephant brains, so keep in mind that raw processing power does not equate to intelligence. We are also supposed to figure out how human brains work and what distinguishes them from similar mammalian brains, so that we can replicate it somehow in computers. The latter is no small task, and certainly much harder to achieve than raw processing power, but I will leave aside the questions about the 20-30 year timeframe singularity proponents tend to assume.
So let’s assume we already have a superhuman AI established. As it is superhuman, we naturally cannot understand today how exactly it will work. But remember that processing power does not equate to intelligence: merely copying the human brain will not produce a superhuman AI, so we would have to take our hopefully perfect understanding of the human brain and human intelligence and improve upon it ourselves to make it superhuman. As I remarked, I will leave these issues aside for the moment.
How exactly will our superior AI cause the explosion of scientific and technological progress that is often described as overwhelming us humans? Basically, it implies speeding up research and development 10- or 100-fold or more. Of course, those results would still need to be implemented in real-world production lines at sufficient speed, but again I will leave this mostly economic issue aside for the moment.
My objection: AI will not speed up progress
Being trained as a physicist, I tend to subscribe to an understanding of science and technology that distinguishes between theoretical and experimental research. Simplified: we develop theoretical concepts, such as the quantisation of energy levels within atoms, or the idea of adding certain components to achieve a catalytic reaction in the chemical industry. These concepts are based on already existing experiments, knowledge and theories; they are not crafted out of thin air. After developing a theoretical concept, we need to confront it with empirical fact, i.e. we design an experiment (or, in more applied fields, build a proof-of-concept prototype). Once this is done, we can judge whether it was a good concept (i.e. the real world behaved as predicted), where its weaknesses are, and hopefully get some ideas for how to improve it. Such experiments can be short, cheap, one-man affairs: suppose we predict the outcome of mixing two chemicals; if both are cheap and abundant, we just mix them, measure and observe. They can also take years and be very expensive: we might even need to build a multi-billion-dollar particle accelerator and run it for a decade or two. However, once this is done and we have the results, we have extended “knowledge” a bit: we now know that the concept worked (or didn’t) and can build on that in future research and development.
You can already guess where I am heading with this. A true superhuman AI is likely to be a superior theoretical researcher. It would outmatch Einstein in a second, and most purely theoretical researchers, such as the economists and astrophysicists at universities around the globe, would be instantly unemployed. It would also be a considerable additional aid in guiding experimental research. However, it cannot replace experimental research and development, which is the truly time- and resource-consuming part of scientific and technological progress. Experimental research is constrained by time and by the availability of funds: money for lab workers, lab space, equipment, chemicals, energy, and whatever other resources the various fields of research require. It simply took time for Rutherford to set up his famous scattering experiment, let the alpha particles pass through the gold foil, count the impacts, derive the distribution and, of course, repeat the experiment. It took considerable resources for Eddington to undertake his 1919 Africa expedition to observe the solar eclipse and thereby confirm Einstein’s general theory of relativity.
It may well be that superhuman AIs can design more efficient experimental setups, e.g. optimising Eddington’s travel schedule and observation equipment, or improving the geometry of Rutherford’s experiment. But they cannot fundamentally alter the fact that such experiments take considerable time and considerable resources. This is today’s, and will remain tomorrow’s, principal bottleneck of research in general, no matter how intelligent we are.
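This bottleneck argument can be made quantitative with an Amdahl's-law style calculation (my own toy illustration, not from the original argument; the fractions and speedup factors below are assumed numbers, not measured ones): if an AI accelerates only the theoretical share of total research time while experiments run at their old pace, the overall speedup stays modest.

```python
def overall_speedup(theory_fraction: float, theory_speedup: float) -> float:
    """Overall research speedup when only the theoretical share of
    total research time is accelerated; the experimental share
    (1 - theory_fraction) is unchanged."""
    experiment_fraction = 1.0 - theory_fraction
    return 1.0 / (experiment_fraction + theory_fraction / theory_speedup)

# Assumed example: theory is 20% of total research effort and the AI
# makes it 100x faster -- the experiments still dominate.
print(overall_speedup(0.2, 100.0))  # roughly 1.25, nowhere near 100
```

Even with an infinitely fast theorist, the total speedup is capped at 1 / (experimental fraction), which is the formal version of the claim that experiments, not ideas, set the pace.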
The important bottleneck is funding, not ideas
To speed up progress, we would need to substantially increase the percentage of GDP that is spent on research; however, this is true whether or not we have an AI at hand. Further automation of various processes will also help to conserve scarce research resources, but again it will do so with or without an AI being developed. And for the sake of completeness, let us imagine we had very cheap (at least cheaper than human labour), flexible (say, humanoid) robots working day and night in our research labs. Granted, direct communication between the AI and the robots would save some time over a human researcher telling the robots what to do, but this still falls far short of the kind of accelerating progress some dreamers of the technological singularity imagine.
In the end, an AI would not be much more useful than our current human researchers and human-made progress. We have plenty of ideas for how to solve certain problems, or simply for extending knowledge for science’s sake; we apply for government, industry or venture-capital funding, hoping that among dozens of candidates our application will be the one accepted for the grant. Maybe we are disappointed when another project is selected, because we consider our own idea more worthy of investigation, even if it might fail (as happens all too often in basic research). But even an AI cannot fundamentally expand our limited research resources beyond what can already be done without any AI.
In the end, all those bright minds wasting their time on unworthy concepts such as the singularity would be better advised to lobby their local MPs to increase governmental research funding. This is probably the easiest way to accelerate progress, especially as the envisioned singularity is nothing more than an ersatz religion for the technology-affiliated crowd.
Edited by TFC, 11 January 2011 - 10:59 PM.