
Essay against AI-Singularity



#1 Guest

  • Guest
  • 320 posts
  • 214

Posted 11 January 2011 - 10:56 PM


So, how is the singularity-AI story supposed to work? What can a superhuman AI in fact contribute?

The singularity concept (my understanding)

As I understand it, the typical singularity story (which a lot of people appear to be mentally fixated on as their route to extending their lives) goes like this:

It's all about the AI. Computers will get faster and faster, finally outmatching by far the raw processing power of the human brain. By the way, the raw processing power of our brains is not much greater than that of dolphin or elephant brains, so keep in mind that raw processing power does not equal intelligence. We are also supposed to figure out how human brains work and what distinguishes them from similar mammalian brains, to enable us to replicate them somehow in computers. The latter point is no small task and certainly much more difficult than scaling processing power, but I will leave aside the questions about the 20-30 year timeframe singularity proponents tend to assume.

So let's assume we already have a superhuman AI established. As it is superhuman, we naturally can't understand today exactly how it will work – but remember that processing power does not equal intelligence. So we would have to manually improve our (hopefully perfect) understanding of the human brain and human intelligence to make it superhuman, as merely copying will not lead to superhuman AI. But as I remarked, I will leave those issues aside for the moment.

How exactly, then, will our superior AI cause an explosion of scientific-technological progress, often described as overwhelming us humans? Basically it implies speeding up research and development 10- or 100-fold or more. Of course those results would still need to be implemented in real-world production lines at sufficient speed, but again I will leave this mostly economic issue aside for the moment.


My objection: AI will not speed up progress

Being trained as a physicist, I tend to subscribe to an understanding of science and technology that distinguishes between theoretical and experimental research. Simplified, if you like: we develop theoretical concepts, such as the quantisation of energy levels within atoms, or the idea of adding certain components to achieve a catalytic reaction in the chemical industry. Those concepts are based on already existing experiments/knowledge and theories and are not crafted out of thin air. After developing our theoretical concept we need to confront it with empirical fact, i.e. we design an experiment (or build a proof-of-concept prototype in more applied fields). Once this is done we can judge whether it was a good concept (i.e. the real world behaved as predicted), where the weaknesses are, and hopefully get some ideas for how to improve the concept. Those experiments can be short and cheap one-man shows – suppose we predict the outcome of mixing two chemicals: if both are cheap and abundant we just mix them, measure and observe. But experiments can also take years and be very expensive – we might even need to build a multi-billion-dollar particle accelerator and run it for a decade or two. However, once this is done and we have the results, we have extended "knowledge" a bit – we now know that the concept worked (or didn't) and can build on it in future research and development.

You can already guess where I am heading with this. A true superhuman AI is likely to be a superior theoretical researcher. It would outmatch Einstein in a second, and most purely theoretical researchers, such as economists and astrophysicists at universities around the globe, would be instantly unemployed. It would also be a considerable additional aid in guiding experimental research. However, it cannot replace experimental research and development, which is the truly time- and resource-consuming part of scientific-technological progress. Experimental research is constrained by time and the availability of funds, meaning paying for lab workers, lab space, equipment, chemicals, energy, and whatever resources are needed in the various fields of research. It simply took time for Rutherford to set up his famous scattering experiment, let the alpha particles pass through the gold foil, count the impacts, derive the distribution and, of course, repeat the experiment. It took quite some resources for Eddington to undertake his 1919 Africa expedition to observe the solar eclipse, thus proving Einstein's general theory of relativity right.

It may well be that superhuman AIs can design more efficient setups for experiments, e.g. optimising the travel schedule and the required observation equipment for Eddington, or improving the geometry of Rutherford's experiment. But it cannot fundamentally alter the fact that it takes considerable time and considerable resources to do those experiments. This is today's, and will be tomorrow's, principal bottleneck of research in general, no matter how intelligent we are.
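To put a number on this bottleneck, here is a minimal toy calculation (the phase durations are assumed purely for illustration, not data from any real project):

# Toy model (assumed numbers): how much does an N-fold speed-up
# of the *theoretical* phase shorten a full theory -> experiment
# research cycle when the experimental phase stays fixed?

def cycle_time(theory_months, experiment_months, theory_speedup):
    """Total duration of one theory -> experiment cycle."""
    return theory_months / theory_speedup + experiment_months

for speedup in (1, 10, 100, 1_000_000):
    t = cycle_time(6, 24, speedup)
    print(f"theory speed-up {speedup:>9,}x -> cycle = {t:.2f} months")

# The cycle time approaches the 24-month experimental floor:
# even an arbitrarily fast theorist saves at most 6 of 30 months.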


The important bottleneck is funding, not ideas


To speed up progress we would need to substantially increase the percentage of GDP that is spent on research – however, this is true regardless of whether we have an AI at hand or not. Further automation of various processes will also help to conserve scarce research resources, but of course it will do so with or without an AI being developed. And for the sake of completeness, let us imagine we have very cheap (at least cheaper than human labour), flexible (say humanoid) robots at hand, working day and night in our research labs. Granted, direct communication between the AI and the robots saves some time over a human researcher telling the robots what to do – but this still falls far short of the kind of accelerating progress some dreamers of the technological singularity imagine.

In the end, an AI is of not much more use than our current human researchers and human-made progress. We have plenty of ideas for how to solve certain problems, or simply for extending knowledge for science's sake; we apply for government, industry or venture-capital funding, hoping that among the dozens of candidates our application will be the one accepted for the grant. Maybe we are disappointed if another project is selected, because in our eyes our idea is more relevant to investigate, even if it might fail (as happens all too often in basic research). But even an AI cannot fundamentally expand our limited research resources beyond what can already be done without any AI.

In the end, all those bright minds wasting their time pursuing unworthy concepts such as the singularity would be better advised to lobby their local MPs to increase governmental research funding. This is probably the easiest way of accelerating progress, especially as the envisioned singularity is nothing more than an ersatz religion for the technology-affiliated crowd.

Edited by TFC, 11 January 2011 - 10:59 PM.

  • like x 1

#2 kmoody

  • Guest, F@H
  • 202 posts
  • 240
  • Location:Syracuse, NY

Posted 12 January 2011 - 09:08 AM

This may be the most intelligent thing I've read on here in some time. Props.


#3 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 12 January 2011 - 10:46 AM

The thing here is that we can have as many "Einsteins" as we want. We can use AIs to improve all aspects of life – not just scientific research, but also making businesses much more efficient. And you forget that as computing power keeps improving exponentially, so will the AIs' intelligence. They'll eventually become so smart that Einstein's intelligence will look like a three-year-old child's to them. A technological/scientific revolution then becomes inevitable.

Once we have these super-smart machines showing the way, I'm sure there will be no limit to how much we'll spend on experimental research. If the government doesn't want to help, private businesses will have enough incentive to pour all the money they can afford into their R&D departments.

#4 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 12 January 2011 - 02:45 PM

As you hopefully noticed, my argument does not depend on the level of intelligence or the number of AIs we have available. For the sake of simplicity I just assumed that we have an incredible AI (or, if you like, several different ones) at hand, single-handedly doing most of the theoretical research currently done by humans – and of course faster, less error-prone and with superior creativity. No limits are assumed here, so if the AI had all the empirical evidence and prior theories at hand that Einstein had throughout his life, it would replicate all his findings instantly, also avoiding some pitfalls such as the cosmological constant. Note that this is a big "if", as Einstein had to wait his entire life for all those experiments and observations to be successively undertaken.

My objection is that no one, regardless of intelligence, can cheat the scientific method. If you do not confront concepts with reality, they are useless for practical purposes. And exactly at this point no AI can fundamentally change the way science is done – it does not relieve us of our obligation to test our theories, and it does not change our need for considerable time and resources to do those experiments. An AI cannot reasonably distinguish between competing theories without empirical evidence. You could also assume that, intelligent as it is, it will come up with a huge variety of possible concepts that match the data or solve certain problems (as humans do today – even if the AI replaces 1,000 or even all human researchers). Are the atomist theories (and which one?) of ancient philosophers such as Democritus correct, or the continuity theories of philosophers such as Aristotle? Any AI in 500 BCE just couldn't tell, for lack of empirical evidence. More often than most people realise, researchers have also simply done experiments without looking for specific concepts, discovering some odd behaviour and using it as the starting point for novel theories (quantum theory is a telling example). They have a telescope or microscope at hand and, just out of curiosity, they point it at the night sky or a drop of water, using their new tools to see what they can find.

We are fundamentally limited by the available research resources – whether we have an AI or not. Private industry can funnel its resources into research right now; AI plays no role in that and does not fundamentally speed up science and technology beyond it. Additional resources speed up science and technology – independently of superhuman AIs. AIs do not generate novel theories, or prove them for that matter, out of thin air (the same applies to human intelligences), and they cannot generate limited research resources out of thin air. A million dollars spent on one research project is a million dollars that cannot be spent on other research. Note: AIs cannot cheat economics either.

The notion of "achieve AI -> a miracle occurs/we don't care about science -> singularity is realised" is not much different from other religions, to say the least. The faster you let go of the futile singularity concept, the earlier you can support more promising causes. Donating money to actual research instead of singularity conferences could be a good start.

Edited by TFC, 12 January 2011 - 03:14 PM.


#5 Marios Kyriazis

  • Guest
  • 466 posts
  • 255
  • Location:London UK

Posted 12 January 2011 - 04:40 PM

The process leading to a singularity is already happening, and it is the most relevant part for us. However, the attainment of an actual Singularity will IMO not be of any real use to humans. An AI entity that is more intelligent than Einstein, or that conducts perfect/blameless research, has no place in humanity. I fail to see the point of a world where super-intelligent machines rule, and where biological humans are nothing but frail, stupid, ephemeral creations. This is not the point. I believe in human enhancement and transhumanism, but that is where it must end. Going any further (to posthumanism) will mean the end of human biology and consequently the end of emotions, feelings etc., i.e. the end of us and the beginning of an alien, cold, emotionless and pointless world.

#6 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 12 January 2011 - 07:29 PM

Maybe you could indicate what you mean by "the process leading to a singularity is already happening"? Do you mean the singularity as promoted by Ray Kurzweil, i.e. AI-based progress that incredibly speeds up science and technology? I think my comments outline the objections to this quite extensively. If you have another picture of a "technological singularity" in mind, it's best to describe it first.

Otherwise, and for future reference, I suggest moving the singularity discussion to this subforum:
http://www.imminst.o...ality-religion/

Edited by TFC, 12 January 2011 - 07:32 PM.


#7 Marios Kyriazis

  • Guest
  • 466 posts
  • 255
  • Location:London UK

Posted 12 January 2011 - 08:04 PM

Maybe you could indicate what you mean by "the process leading to a singularity is already happening"? Do you mean the singularity as promoted by Ray Kurzweil, i.e. AI-based progress that incredibly speeds up science and technology? I think my comments outline the objections to this quite extensively. If you have another picture of a "technological singularity" in mind, it's best to describe it first.

Otherwise, and for future reference, I suggest moving the singularity discussion to this subforum:
http://www.imminst.o...ality-religion/


Singularity is the final stage of a long process involving progressively more achievement in progressively less time. In the case of the technological singularity, an infinite amount of technological achievement would happen in an infinitely short time. But before (if ever) we reach that point (the Singularity point), there is a process that we are now experiencing: more and more is achieved in less and less time. This process started with the industrial revolution of the 19th century, and it has been accelerating since. Whether this will lead to an actual, properly-defined Singularity remains to be seen.

#8 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 12 January 2011 - 09:14 PM

OK, so you are not referring to the commonly promoted Kurzweil AI concept. Keeping the AI issue out of the equation, I do not know whether a singularity defined as infinite progress in minimal time is realistically achievable anytime in the foreseeable future. I take it that you do not literally mean "infinite", but progress vastly faster than today's. Given that mankind acquires more and more economic means to finance research (think of the research contributions of China and India once they are fully developed), it makes sense to say that we will be able to spend much more on research than today. As technology-driven intensive economic growth isn't likely to stop anytime soon, in 100 years we will have maybe 5 times (or more – pure speculation at the moment) the global research resources available compared to 2010. The same is true if you compare real (not nominal!) research spending in 1900 and 2000. Of course there are some diminishing returns involved, so having twice the money available doesn't mean doubling the research speed (see the illustrative numbers below).
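One common stylised way to picture such diminishing returns (the functional form and the exponent are assumptions chosen for illustration, not an empirical law):

# Illustrative diminishing-returns model (alpha is an assumption,
# not an estimated parameter):
# research output ~ spending ** alpha, with 0 < alpha < 1.

def relative_progress(spending_multiple, alpha=0.5):
    """Output relative to baseline when spending is multiplied."""
    return spending_multiple ** alpha

for m in (1, 2, 5, 10):
    print(f"{m:>2}x spending -> {relative_progress(m):.2f}x progress")

# With alpha = 0.5: 2x spending -> ~1.41x progress, 5x -> ~2.24x,
# 10x -> ~3.16x. More money always helps, but sub-proportionally.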

Still, it will take a long, long time to achieve this modified kind of singularity, and it will be a comparatively slow and gradual development (I suggest we do not call it a singularity, as it has not much to do with the kind of revolutionary AI progress of the Kurzweil singularity). Everyone can speed it up by promoting general research spending to their local MP or by donating money to actual research.



Returning to the original topic of my essay, I continue to be interested in opposing statements, preferably with sound refutations.

Edited by TFC, 12 January 2011 - 09:20 PM.


#9 forever freedom

  • Guest
  • 2,362 posts
  • 67

Posted 12 January 2011 - 11:00 PM

I believe in human enhancement and transhumanism, but that is where it must end. Going any further (to posthumanism) will mean the end of human biology and consequently the end of emotions, feelings etc., i.e. the end of us and the beginning of an alien, cold, emotionless and pointless world.


Nothing wrong with that as I see it. I don't see why becoming machines would kill us as we are. Sure, our biology may be gone, but that doesn't mean there will be no more emotions or no point in anything. Much to the contrary: we'll eliminate human frailties and keep the good parts.

#10 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 12 January 2011 - 11:24 PM

This is one of the most clueless, naive things I've read here in a long time...

So, how is the singularity-AI story supposed to work? What can a superhuman AI in fact contribute?

The singularity concept (my understanding)

As I understand it, the typical singularity story (which a lot of people appear to be mentally fixated on as their route to extending their lives) goes like this:

It's all about the AI. Computers will get faster and faster, finally outmatching by far the raw processing power of the human brain. By the way, the raw processing power of our brains is not much greater than that of dolphin or elephant brains, so keep in mind that raw processing power does not equal intelligence.
I guess this sounds like the accelerating-change school of the Singularity. That is not the only school of thought on the Singularity. See http://yudkowsky.net...ularity/schools
We are also supposed to figure out how human brains work and what distinguishes them from similar mammalian brains, to enable us to replicate them somehow in computers.
That is not the only way to design an intelligence. You don't have to try to reverse-engineer the human brain. You can also try to learn the principles of intelligence on a mathematical level and build something new and clean from the ground up, which would be a more reasonable way to do it.
The latter point is no small task and certainly much more difficult than scaling processing power, but I will leave aside the questions about the 20-30 year timeframe singularity proponents tend to assume.

So let's assume we already have a superhuman AI established. As it is superhuman, we naturally can't understand today exactly how it will work
Wrong. You are conflating not knowing what decisions and actions someone smarter than you will take with not understanding how that process actually works. I can design any number of simple programs whose behaviour I cannot predict.
– but remember that processing power does not equal intelligence. So we would have to manually improve our (hopefully perfect) understanding of the human brain and human intelligence to make it superhuman, as merely copying will not lead to superhuman AI. But as I remarked, I will leave those issues aside for the moment.
As I remarked, this is not the only way to do it.

How exactly, then, will our superior AI cause an explosion of scientific-technological progress, often described as overwhelming us humans? Basically it implies speeding up research and development 10- or 100-fold or more. Of course those results would still need to be implemented in real-world production lines at sufficient speed, but again I will leave this mostly economic issue aside for the moment.
I would refer you to these sources:

Intelligence Explosion

4. The Power of Intelligence - http://yudkowsky.net/singularity/power
5. Seed AI - http://singinst.org/...OGI/seedAI.html
6. Cascades, Cycles, Insight... - http://lesswrong.com...cycles_insight/
7. ...Recursion, Magic - http://lesswrong.com...ecursion_magic/
8. Recursive Self-Improvement - http://lesswrong.com...elfimprovement/
9. Hard Takeoff - http://lesswrong.com...f/hard_takeoff/
10. Permitted Possibilities, & Locality - http://lesswrong.com...ities_locality/


My objection: AI will not speed up progress

Being trained as a physicist, I tend to subscribe to an understanding of science and technology that distinguishes between theoretical and experimental research. Simplified, if you like: we develop theoretical concepts, such as the quantisation of energy levels within atoms, or the idea of adding certain components to achieve a catalytic reaction in the chemical industry. Those concepts are based on already existing experiments/knowledge and theories and are not crafted out of thin air. After developing our theoretical concept we need to confront it with empirical fact, i.e. we design an experiment (or build a proof-of-concept prototype in more applied fields). Once this is done we can judge whether it was a good concept (i.e. the real world behaved as predicted), where the weaknesses are, and hopefully get some ideas for how to improve the concept. Those experiments can be short and cheap one-man shows – suppose we predict the outcome of mixing two chemicals: if both are cheap and abundant we just mix them, measure and observe. But experiments can also take years and be very expensive – we might even need to build a multi-billion-dollar particle accelerator and run it for a decade or two. However, once this is done and we have the results, we have extended "knowledge" a bit – we now know that the concept worked (or didn't) and can build on it in future research and development.

You can already guess where I am heading with this. A true superhuman AI is likely to be a superior theoretical researcher. It would outmatch Einstein in a second, and most purely theoretical researchers, such as economists and astrophysicists at universities around the globe, would be instantly unemployed. It would also be a considerable additional aid in guiding experimental research. However, it cannot replace experimental research and development, which is the truly time- and resource-consuming part of scientific-technological progress. Experimental research is constrained by time and the availability of funds, meaning paying for lab workers, lab space, equipment, chemicals, energy, and whatever resources are needed in the various fields of research. It simply took time for Rutherford to set up his famous scattering experiment, let the alpha particles pass through the gold foil, count the impacts, derive the distribution and, of course, repeat the experiment. It took quite some resources for Eddington to undertake his 1919 Africa expedition to observe the solar eclipse, thus proving Einstein's general theory of relativity right.

It may well be that superhuman AIs can design more efficient setups for experiments, e.g. optimising the travel schedule and the required observation equipment for Eddington, or improving the geometry of Rutherford's experiment. But it cannot fundamentally alter the fact that it takes considerable time and considerable resources to do those experiments. This is today's, and will be tomorrow's, principal bottleneck of research in general, no matter how intelligent we are.

The important bottleneck is funding, not ideas

To speed up progress we would need to substantially increase the percentage of GDP that is spent on research – however, this is true regardless of whether we have an AI at hand or not. Further automation of various processes will also help to conserve scarce research resources, but of course it will do so with or without an AI being developed. And for the sake of completeness, let us imagine we have very cheap (at least cheaper than human labour), flexible (say humanoid) robots at hand, working day and night in our research labs. Granted, direct communication between the AI and the robots saves some time over a human researcher telling the robots what to do – but this still falls far short of the kind of accelerating progress some dreamers of the technological singularity imagine.

In the end, an AI is of not much more use than our current human researchers and human-made progress. We have plenty of ideas for how to solve certain problems, or simply for extending knowledge for science's sake; we apply for government, industry or venture-capital funding, hoping that among the dozens of candidates our application will be the one accepted for the grant. Maybe we are disappointed if another project is selected, because in our eyes our idea is more relevant to investigate, even if it might fail (as happens all too often in basic research). But even an AI cannot fundamentally expand our limited research resources beyond what can already be done without any AI.
Yes, there will be some ramp-up time until it creates full-blown Drexler-style molecular nanotechnology, but at that point *there are no more obstacles in terms of speed/resources*. Remember, someone smarter than you has the power to surprise you, so you should expect to be surprised by how quickly it can break down these barriers you perceive.

In the end, all those bright minds wasting their time pursuing unworthy concepts such as the singularity would be better advised to lobby their local MPs to increase governmental research funding. This is probably the easiest way of accelerating progress, especially as the envisioned singularity is nothing more than an ersatz religion for the technology-affiliated crowd.
So your brilliant idea is to steal money from the actually productive people and funnel it into government science? And you think that is going to speed up progress? LOL


Edited by RighteousReason, 12 January 2011 - 11:28 PM.

  • dislike x 2

#11 Marios Kyriazis

  • Guest
  • 466 posts
  • 255
  • Location:London UK

Posted 13 January 2011 - 07:50 PM

I believe in human enhancement and transhumanism, but that is where it must end. Going any further (to posthumanism) will mean the end of human biology and consequently the end of emotions, feelings etc., i.e. the end of us and the beginning of an alien, cold, emotionless and pointless world.


Nothing wrong with that as I see it. I don't see why becoming machines would kill us as we are. Sure, our biology may be gone, but that doesn't mean there will be no more emotions or no point in anything. Much to the contrary: we'll eliminate human frailties and keep the good parts.


Time will tell. But I find it difficult to envisage a world where all biological systems will become redundant, and where artificial media will completely take over.

Also, human frailties are necessary in many respects (think of the fun in dealing with a challenge or a problem, think of the satisfaction you get when you overcome a difficult patch in your life). If you don't feel thirsty, you will never enjoy the feeling of drinking a nice cool glass of water. If your shoes don't cut your feet, you will not get the satisfaction of taking them off and wiggling your toes. Small human pleasures...

#12 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 13 January 2011 - 08:46 PM

@RighteousReason

I focused on the Kurzweil version of the singularity as it appears to be the most popular one. Reading the link you provided, the other two versions don't appear to be fundamentally different, i.e. they also rely on the assumption of superintelligence, albeit reached in different ways. The second version claims an explosion of science and technology once superintelligence is reached (sounding even more radical than Kurzweil), while the third version doesn't say how it would influence the real world (but presumably also through incredible research speed).

As my argument does not rely on how or whether superintelligence is achieved (indeed, for simplicity I assume we have superintelligence at hand), my thoughts should apply to all three versions.

Your other links deal with the nature, limits, achievability etc. of superintelligence/AI. They do not exactly outline how it could cheat science and economics, so I again have to repeat that my thoughts do not depend on the existence of AI – indeed, I assume its existence, arguing that the AI has to deal with the same limits on research that human researchers face today (the scientific method and economics, i.e. experiments need research resources) and therefore cannot incredibly speed up progress. Automation saves time and research money, fine; more resources speed up research, fine; but this is true with and without AI. It does not depend on the style or level of AI, especially as there are already so many human-made concepts (cryonics, SENS, mind uploading etc.) that lack funding – meaning the important bottleneck is funding, not ideas for where to spend the resources.

Superintelligence is no miracle machine; lacking sufficient empirical evidence, it cannot tell which of several competing concepts is correct, any more than humans can. An SI has to do the same kinds of experiments, spending the same scarce resources to generate progress, as humans do (or not, if insufficiently funded). Even an AI/SI can spend resources only once.

I had to read up a bit on Drexler's molecular nanotechnology (MNT). Honestly, you have to admit that it is far from proven to function as envisioned. As far as I understand it, the concept is a very strong assumption (= a weakness) to base your whole argumentation on. It is still a theoretical concept that needs to be tested against reality, so taking it as fact is not a sound refutation. "Maybe Drexler MNT can work as envisioned" – "or not" would be a better description.

If you are willing to discuss the Drexler topic further, I would ask you to start a new thread and post a link to it here – there we can exchange arguments about the feasibility of Drexler MNT without hijacking the AI-singularity discussion (I have quite a few objections to MNT, too). For simplicity's sake I will just assume that Drexler MNT does indeed function as described in its most recent version. Will it incredibly speed up research? And what part does AI play in it?


Excursus: AI + Drexler nano-tech (if ever realised) does not speed up progress

Essentially, Drexler describes molecular nano-assemblers as a kind of advanced 3D printer, except that instead of common rapid-prototyping ink, a variety of atoms are assembled to build whatever is needed. This, of course, resolves neither the scarcity of resources nor the time experiments take:

You cannot just take a bunch of soil from your backyard and create anything out of it – it doesn't contain all the needed elements and, more importantly, those elements that are present occur in strongly bound forms, mostly as oxides. If Rutherford needs a gold foil, you need gold; if you build an LCD display, you need indium (among other things); etc. So you will have to buy those scarce resources for your printer, unless you do miracle nano-nuclear fusion. In fact, you will ultimately have to buy all your resources if you do not intend to violate physics.

How does the assembler acquire the atoms to construct "things"? Does it "pick" them out of the soil, assembling silicon wafers and iron electromagnets? The problem is that most elements occur not "pure" but chemically bound, predominantly as oxides and sulfides. Your soil is mostly SiO2, plus iron oxides, aluminium oxides etc. So to acquire the atoms, the MNT has to separate the oxygen from the element. Unfortunately, oxygen has a very high electronegativity (second only to fluorine) and consequently forms very strong covalent bonds with most elements, bonds that cannot easily be broken. Oxidised states, especially of most metals, cannot simply be resolved by weak forces; the same holds for many important non-metal oxides, including SiO2. The situation for the far less abundant sulfides is not significantly better. MNT, operating at small scales and intended to craft things rapidly, apparently employs neither extreme heat nor the means for redox reactions to liberate the individual metal and non-metal atoms.

So we need to supply those atoms using good old-fashioned blast furnaces, electrolysis facilities and all the other non-MNT technology (and productivity). Physics doesn't allow many alternatives for breaking strong bonds, short of complex "wet" biochemistry. Economically, we will have to pay for resources, be it the mentioned materials such as gold, lab space, energy, lab workers, chemicals, and even the lab equipment that cannot practically be built by MNT assemblers, e.g. a laser workbench or just the door to the lab. Also note that those nano-3D-printers are often described as working in vacuum, since certain nano-handles need non-reactive environments – essentially meaning that even if you do separate the oxygen, you should be careful where it goes, as it can render the assembler useless if it reacts with those components (not to mention the resources needed to maintain a desktop-size vacuum chamber). In the end we have an advanced rapid-prototyping device that is certainly useful, but far from a source of limitless resources. But how does this incredibly speed up science, and why do we need an AI for it?

In the end we will still have to do the experiments that confront our theoretical concepts with reality. Medical researchers develop a cure for Alzheimer's/CVD etc. in mice – so they take two groups of mice (treatment and control), treat them, wait some weeks, kill the mice and examine the bodies. They do this using equipment produced by the assembler, which saves research resources, but there is no miraculous progress involved. Physicists/engineers build the next-generation particle accelerator. The assemblers provide those parts that can readily be crafted with them (the small and tricky parts, not the larger structures), and the machine runs for 20 years, discovering a new extension of quantum mechanics. They saved research resources (though not incredibly much), but again no incredible progress is involved. Certainly there will be fields where progress is accelerated much more – where the equipment itself is the main cost, where manipulating things atom by atom is advantageous (e.g. computer science), or where one can conveniently alter some parameters of a mechanical-engineering prototype, assemble it, test it, and try another set of parameters four hours later.

But it does all this without an AI. As an AI has to deal with the same limited resources and the same experiment-related time requirements as humans, it cannot incredibly speed up progress beyond what can be done by human intelligence.

Note that the economies of developed countries are predominantly service-based. Some manufacturing-related services would go down, together with much of manufacturing itself. But we should not overstate the additional resources as "infinite". Grant that economy-wide deployment of assemblers would itself free up, say, 10 times the research resources. One should hope that some of the resources freed by MNT would indeed be invested in SENS or cryonics research, but this is ultimately a political question, and it depends not primarily on global research spending but on the motivation to allocate limited resources to life-extension fields (which could be done today).


The above points are of a theoretical nature and only relevant IF Drexler assemblers are possible as envisioned. Judging from my understanding of physics, I doubt even that – challenge me in another thread.


@mrszeta, forever freedom

Without wanting to appear impolite, I'd like to suggest opening a new topic for your desirability discussion, to keep this one focused on the feasibility of the AI singularity.


Everyone else (and of course you two as well) is encouraged to provide counters to the issues I outlined in my essay and subsequent replies; and please try to distinguish not-yet-proven concepts from actual or reliably foreseeable science (even SENS and space elevators are much more deeply rooted in reality than Drexler MNT).

Edited by TFC, 13 January 2011 - 09:33 PM.


#13 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 13 January 2011 - 10:51 PM

There's not really an argument I can give you, I can only say that you are suffering from a serious deficit of imagination :p

Edited by RighteousReason, 13 January 2011 - 10:52 PM.

  • dislike x 3

#14 platypus

  • Guest
  • 2,386 posts
  • 240
  • Location:Italy

Posted 13 January 2011 - 11:43 PM

There's not really an argument I can give you, I can only say that you are suffering from a serious deficit of imagination :p

Well yeah, that's definitely a non-argument, doh.

I'm worried about the sanity of the created superintelligences. How can it be guaranteed that these superintelligent and probably conscious machines will stay sane if everything in them happens a billion times faster than human thought?

#15 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 14 January 2011 - 02:15 PM

There's not really an argument I can give you, I can only say that you are suffering from a serious deficit of imagination :p


You do realise that this sounds somewhat as if I had just outlined that the Earth wasn't created 6,000 years ago in 7 days, and you, even seeing the rationale behind it (the scientific method), just go on ignoring it to preserve your belief system?

The irony is that neither I nor people in general lack imagination – indeed, we have so many research ideas and concepts at hand that we cannot possibly investigate them all in the foreseeable future, even with several times the research resources. Moreover, as research resources are so limited, the grant-giving organisations require us to detail the practical value (within relatively short time frames) of the research in our applications, even for many basic-research projects.


Come on, you are smart people. I don't say that my thoughts are the final wisdom in this matter; nonetheless they seriously undermine any singularity expectations. If you see flaws in my logic, or can think of ways around it, I would appreciate it if you outlined them. But please don't just ignore it, as if to turn the "singularity" from a scientific into a religious concept.

Edited by TFC, 14 January 2011 - 02:21 PM.

  • like x 1
  • dislike x 1

#16 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 15 January 2011 - 12:28 PM

There's not really an argument I can give you, I can only say that you are suffering from a serious deficit of imagination :p


You do realise that this sounds somewhat as if I had just outlined that the Earth wasn't created 6,000 years ago in 7 days, and you, even seeing the rationale behind it (the scientific method), just go on ignoring it to preserve your belief system?

The irony is that neither I nor people in general lack imagination – indeed, we have so many research ideas and concepts at hand that we cannot possibly investigate them all in the foreseeable future, even with several times the research resources. Moreover, as research resources are so limited, the grant-giving organisations require us to detail the practical value (within relatively short time frames) of the research in our applications, even for many basic-research projects.


Come on, you are smart people. I don't say that my thoughts are the final wisdom in this matter; nonetheless they seriously undermine any singularity expectations. If you see flaws in my logic, or can think of ways around it, I would appreciate it if you outlined them. But please don't just ignore it, as if to turn the "singularity" from a scientific into a religious concept.

That was a nice way of me saying you are too dumb to imagine what it could do. But since you pressed the point...
  • dislike x 5

#17 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 15 January 2011 - 02:30 PM

I apologize for my apparently somewhat rude example. Maybe I don't have the kind of imagination you have. But how does this invalidate the reasoning? It might well be that I failed to consider an important aspect that could change the results of my thoughts. But so far no one seems to be willing (or able?) to come up with it.

Can you imagine how it could circumvent the limits on research outlined so far (as you correctly remarked, I am not able to)? If yes, I would honestly and respectfully be interested to hear it. The same applies to everyone else. I don't say that I am happy with the results I have derived so far. If possible, I would very much be in favour of some kind of "singularity". But we should not go so far as to delude ourselves when there is contradicting evidence.

Edited by TFC, 15 January 2011 - 02:32 PM.


#18 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 16 January 2011 - 12:47 AM

As I already outlined, the limiting factor in research is not ideas but resources. It would therefore be sufficient to show that, given the same resources and the same gaps in empirical knowledge about reality, an AI could radically outperform human intelligences (and for that matter, we are talking about the scientific community).

The two hypothetical approaches I can think of are:

a) we already have sufficient empirical evidence to prove our theories, but the human research community overlooks much of it;

b) the AI is able to use the resources in a way that human researchers are unable or unwilling to (maybe in combination with a), if in addition we are not even aware of this method).


These are of course just hypotheses that need to be detailed and justified to constitute sound refutations. Bear in mind that, to achieve an AI-driven singularity, an AI research-speed factor of 2 or 3 above human researchers is probably not the kind of progress envisioned.


Also note that I do not doubt that we will eventually develop post-human AI, with all its ethical consequences (though IMO likely not in the next 20 years). I am just sceptical about the inevitable-singularity conclusion many people seem to subscribe to. Indeed, I believe that focusing on getting as much funding as possible for projects such as SENS today is much more promising for life extensionists.

Edited by TFC, 16 January 2011 - 01:12 AM.


#19 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 16 January 2011 - 04:52 AM

the limiting factor in research is not ideas but resources

I don't buy this at all
  • like x 1
  • dislike x 1

#20 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 16 January 2011 - 05:07 AM

I believe in human enhancement and transhumanism, but that is where it must end. Going any further (to posthumanism) will mean the end of human biology and consequently the end of emotions, feelings etc., i.e. the end of us and the beginning of an alien, cold, emotionless and pointless world.

Nothing wrong with that as I see it. I don't see why becoming machines would kill us as we are. Sure, our biology may be gone, but that doesn't mean there will be no more emotions or no point in anything. Much to the contrary: we'll eliminate human frailties and keep the good parts.

Time will tell. But I find it difficult to envisage a world where all biological systems will become redundant, and where artificial media will completely take over.

Also, human frailties are necessary in many respects (think of the fun in dealing with a challenge or a problem, think of the satisfaction you get when you overcome a difficult patch in your life). If you don't feel thirsty, you will never enjoy the feeling of drinking a nice cool glass of water. If your shoes don't cut your feet, you will not get the satisfaction of taking them off and wiggling your toes. Small human pleasures...

By this logic, we should be hitting ourselves on the head with hammers, since it will feel really good when we stop. A post-human future should hold more, rather than less, pleasure and meaning. Otherwise, why go there? We will do what fulfills us.

#21 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 16 January 2011 - 05:35 AM

As I already outlined, the limiting factor in research is not ideas but resources. It would therefore be sufficient to show that, given the same resources and the same gaps in empirical knowledge about reality, an AI could radically outperform human intelligences (and for that matter, we are talking about the scientific community).

The two hypothetical approaches I can think of are:

a) we already have sufficient empirical evidence to prove our theories, but the human research community overlooks much of it;

b) the AI is able to use the resources in a way that human researchers are unable or unwilling to (maybe in combination with a), if in addition we are not even aware of this method).

These are of course just hypotheses that need to be detailed and justified to constitute sound refutations. Bear in mind that, to achieve an AI-driven singularity, an AI research-speed factor of 2 or 3 above human researchers is probably not the kind of progress envisioned.

Also note that I do not doubt that we will eventually develop post-human AI, with all its ethical consequences (though IMO likely not in the next 20 years). I am just sceptical about the inevitable-singularity conclusion many people seem to subscribe to. Indeed, I believe that focusing on getting as much funding as possible for projects such as SENS today is much more promising for life extensionists.

TFC, I understand what you're saying, and I think that you are correct in principle, but that you may be erring in magnitude. Perhaps you are looking at the research enterprise from a physics perspective where "Big Science" projects (large, costly, very long timeframes) are common. In the biological realm, experiments are often quick and cheap, so an AI that functions as an exceptionally brilliant research director might advance progress rapidly. If we consider the present world without a superhuman AI, we see that knowledge acquisition is accelerating due to various beneficial feedback loops. (more knowledge yields better tools and better communications systems which yields yet more knowledge...) If an AI can increase the speed of the existing process by a factor of two or three, it will compound our system of accelerating returns to an even greater degree. Without invoking any sort of religion or spiritual aspect, it's pretty clear that given a superhuman AI, we will see substantial progress in at least some areas. That progress will enhance our overall wealth, which will provide more resources to devote to the parts of science that are very resource intensive. In my view, projects like SENS should see significant funding right now, but things like AI should also be a component of our current research portfolio.
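A minimal sketch of that compounding effect (a toy model; the 7% baseline growth rate and the 3x multiplier are assumed purely for illustration):

# Toy compounding model (all numbers assumed for illustration):
# knowledge grows exponentially at rate g; an AI multiplies the
# growth rate by k from year 0 onward.

def knowledge(years, g=0.07, k=1.0):
    """Knowledge relative to today after `years` at rate g*k."""
    return (1 + g * k) ** years

for years in (10, 20, 40):
    base = knowledge(years)
    boosted = knowledge(years, k=3.0)  # AI triples the growth rate
    print(f"{years} yrs: baseline {base:6.1f}x, boosted {boosted:8.1f}x")

# At 7% baseline growth, tripling the rate yields ~2,000x instead of
# ~15x after 40 years: a modest speed-up factor compounds enormously.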
  • like x 1

#22 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 16 January 2011 - 05:53 AM

the limiting factor in research is not ideas but resources

I don't buy this at all



So you think human scientists get the funding they ask for? That the research budgets of the various funding bodies do not reflect political-economic decisions and restrictions on how many resources are allocated to social security, the military, the purchase of goods and services etc., and finally to research? That, rather, the first decision in budget planning is to give scientists all the research resources they want, with the remaining resources allocated only once the needs of the scientists are satisfied? And that scientists could not very significantly speed up research if they were given several times their current funding? Why, then, is it even necessary to compete with so many other researchers for research grants?

For laser physics (and many other physical fields) I guarantee that people do not struggle over how to spend their money. Nor did I get the impression that Aubrey or E. Drexler or the whole stem-cell community ever complained about being allocated too much funding and lacking reasonable ideas for how to spend it. Indeed, most scientists are fed up with the extensive application processes and regulations that are enforced to allocate the limited funding – throughout the sciences. A lot of group leaders directly and indirectly spend virtually half their time organising funding for their research.

I would therefore like to ask whether you could explain in more detail your doubts about the assumption that the important bottleneck is funding.

#23 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 16 January 2011 - 12:08 PM

I am not sure whether I agree. Just to clarify: I do not oppose AI research, and I believe that (if correctly implemented – FAI etc.) it would yield substantial benefits for research.

TFC, I understand what you're saying, and I think that you are correct in principle, but that you may be erring in magnitude. Perhaps you are looking at the research enterprise from a physics perspective where "Big Science" projects (large, costly, very long timeframes) are common. In the biological realm, experiments are often quick and cheap, so an AI that functions as an exceptionally brilliant research director might advance progress rapidly.


I would agree with this if the AI had unlimited research resources at hand – not literally, but in the sense that as soon as it requests funding, it is granted in whatever amount desired. Otherwise it would just spend the funding faster, running dry after 6 months of research on an annual budget (which is normally already less than what a scientist asked for). And while you can fund a couple of projects on such a kind of "unlimited" grant, it proportionally increases the number of projects not being funded. So on an economy-wide scale, the AI cannot change the number of projects going unfunded or underfunded (at least not in a positive way).


If we consider the present world without a superhuman AI, we see that knowledge acquisition is accelerating due to various beneficial feedback loops. (more knowledge yields better tools and better communications systems which yields yet more knowledge...)


I would bet that this acceleration is more a function of global research spending than of the "level" of accumulated knowledge (however you measure it). So if you plot research progress from 1900-2000 against funding increases and against "accumulated knowledge", the correlation will favour the resources spent in the respective fields.


If an AI can increase the speed of the existing process by a factor of two or three, it will compound our system of accelerating returns to an even greater degree. Without invoking any sort of religion or spiritual aspect, it's pretty clear that given a superhuman AI, we will see substantial progress in at least some areas. That progress will enhance our overall wealth, which will provide more resources to devote to the parts of science that are very resource intensive.


The question is how the AI could achieve even this, economy-wide across the sciences, and beyond what can already be done with human intelligence (in addition, I am pretty sure that merely doubling it is not what singularity people have in mind). There might be areas that are indeed not primarily slowed down by funding, where people are simply not able to connect the dots (meaning category a) above) – so if you can convincingly argue that this is the case, that post-human AI can achieve very substantial progress there, and that this will lead to massive additional resources economy-wide (e.g., from a singularity perspective, a lasting massive jump in per-capita growth rates), this might be a valid argument. But also keep in mind that politics still needs to allocate those resources to research funding instead of letting people spend them on goods and services.


In my view, projects like SENS should see significant funding right now, but things like AI should also be a component of our current research portfolio.


AI research shouldn't require the same amount of funding that SENS needs (correct me if I am wrong), so I have no problem with that. I am just upset by how people set their minds on a "singularity", as if it will arrive and immediately enable all kinds of technology, including SENS. Literally thinking: "why should I care about funding for regenerative medicine today, when the singularity will automatically achieve it in 30 years?" Some of them even think that mind uploading (though still a debatable concept) will be available immediately, and thus do not care about regenerative medicine at all. This at a time when SENS and similar projects are far from mainstream, struggle to get any funding, and need every bit of support available, at least from transhumanists and life extensionists, to survive.

Edited by TFC, 16 January 2011 - 12:22 PM.


#24 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 16 January 2011 - 06:35 PM

Dude like I said I'm not here to argue with you, just to troll you for being fucking clueless.
  • dislike x 3

#25 Elus

  • Guest
  • 793 posts
  • 723
  • Location:Interdimensional Space

Posted 17 January 2011 - 12:19 AM

A truly superior AI may be able to design experiments that are far more efficient in terms of cost, resources, and manpower than the experiments we currently use to test a hypothesis.

Think of an experiment like the LHC, but perhaps the AI thinks of a way to accelerate the particles in the space of a few feet instead of kilometres.

It seems to me a little naive to speculate about the abilities of a superior AI when we don't even know what "superior" truly means.

#26 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 17 January 2011 - 02:18 AM

Well, you picked a bad example, as for their investigations they simply need particles that are extremely close to light speed, and within the current laws of physics there are not many alternatives to this type of accelerator.


However, the "softer" a science and the laws that govern it are, I suppose, the more likely it is that scientists overlook things. So while I doubt there is much a superintelligence could do to come up with less resource-intensive solutions for problems such as extreme particle acceleration, I guess that in chemistry, and even more so in biology or mechanical engineering, there are some merits to be earned by an SI.

Of course it's hard to speculate whether this can solve the fundamental scarcity of resources (or make human-level ideas the scarce factor instead of funding). By filling gaps in theoretical knowledge and increasing efficiency, there are likely resources to be saved (and spent on other research), but I am not sure whether this would satisfy any kind of singularity, as arguably human intelligences would have no problem spending even triple the funding they currently have. This is also demonstrated by the fact that real global research funding has multiplied over the last hundred years (roughly estimated, by a factor greater than 7) and people still complain that they are underfunded. Seen this way, AIs appear to be supplemental to current research methods, but do not necessarily resolve the scarcity of resources.


Please note that I always emphasise the additional benefits to be gained by superintelligence beyond what can already be done by "normal" intelligence. And if an SI cannot, through some inherent ability, reverse the relation between excess "normal-intelligence" ideas and scarce resources, it cannot radically outperform those NIs.

In fact, I am starting to believe that the decisive technology for vastly accelerated research is not superintelligence. SI can be a supplemental factor once such a flow of resources is established, so that researchers get all of the funding they desire.

Imagine today's researchers had an unlimited supply of resources. Aubrey would get 2 billion USD worth of research resources to do SENS research (or more, if he thought that could speed the research up further). The CERN people would get funding to build 2 (or more) additional copies of the LHC, tripling the rate at which they collect data (and so finishing the job in roughly 1/3 of the time, or proportionally less). With unlimited funding, human researchers could not only carry out the numerous projects they are currently unable to do, or scale up underfunded ones (and for scientists, a project can always use more funds), but they could also parallelise research to an incredible degree – a back-of-the-envelope version of this arithmetic is sketched below. We would see so many breakthroughs each year that it would arguably already come close to singularity-like conditions.
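Concretely (the event counts and rates below are assumed for illustration, not CERN figures):

# Toy parallelisation arithmetic (assumed numbers, not CERN data):
# the time to collect a fixed number of events scales inversely
# with the number of identical, independently running machines.

def years_to_target(target_events, events_per_year, n_machines):
    """Years until n_machines together reach target_events."""
    return target_events / (events_per_year * n_machines)

TARGET = 300  # hypothetical number of rare events needed
RATE = 10     # hypothetical events per machine per year
for n in (1, 2, 3):
    print(f"{n} machine(s): {years_to_target(TARGET, RATE, n):5.1f} years")

# 1 machine: 30.0 years; 3 machines: 10.0 years. With unlimited
# funding, money buys calendar time, no superintelligence required.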

The question for me is which technologies will make incredible amounts of resources available – which will fundamentally alter the scarcity of resources. Once such a technology is identified, we should invest our currently scarce resources into those research fields, regardless of whether we have an AI at hand or not. If we do not have this technology, every intelligence, human or post-human, is fundamentally limited by resources. Yes, an AI can do some experiments more efficiently, or come up with alternative, cheaper solutions, but this alone doesn't change the fundamental scarcity of resources.

But unless humans themselves are unable to develop this kind of technology for unlimited human-I-level research, the additional benefits of an AI will not fundamentally alter the picture.

Edited by TFC, 17 January 2011 - 02:26 AM.


#27 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 17 January 2011 - 03:49 AM

I just looked up some US numbers. If we assume a constant fraction of GDP spent on research and development in the US throughout the 20th century (certainly not the case, but a rough approximation), we can see how much funding increased over those 100 years. With real GDP (deflated to constant 2000 USD) of

1900: 376 billion USD
2000: 9,817 billion USD


domestic funding would have multiplied roughly 26-fold in 100 years (but keep in mind the different purchasing power parity in 1900 and 2000) - and there is no end of scarcity in sight. Global GDP is a bit harder to interpret, as countries such as China and India did virtually no research in 1900 but of course do so today. The global numbers, deflated to 1990 USD:

1900: 1,103 billion USD
2000: 41,017 billion USD
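These figures imply roughly a 37-fold global increase. As a quick check on the arithmetic, a minimal Python sketch (nothing assumed beyond the numbers above):

    # Growth factors implied by the GDP figures above, assuming research
    # funding remained a constant fraction of GDP over the century.
    us_1900, us_2000 = 376.0, 9817.0          # real US GDP, billions of 2000 USD
    world_1900, world_2000 = 1103.0, 41017.0  # real world GDP, billions of 1990 USD

    print(round(us_2000 / us_1900, 1))        # -> 26.1 (the ~26-fold US figure)
    print(round(world_2000 / world_1900, 1))  # -> 37.2 (global growth factor)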

Edited by TFC, 17 January 2011 - 03:52 AM.


#28 Athanasios

  • Guest
  • 2,616 posts
  • 163
  • Location:Texas

Posted 17 January 2011 - 08:12 AM

Biology has the problem of there being far too much information for us to deal with - finding the piece of information we need is like looking for a needle in a haystack. I expect this trend will grow as imaging and biosensors continue to improve. A powerful AI could do wonders with just our current data. By the time a powerful singularity-inducing AI arrives, who knows what will be possible. Also, many of the tools used are not inherently expensive to build, but the cost of the eureka moment is high. The interdisciplinary ability of a powerful AI would overcome many barriers.

Even if we talk about ideas and funding, ignoring the above, a powerful AI may not have to take each idea one step at a time. It may be able to predict what would happen if its hypothesis were true many steps down the line. Maybe an analogy would be comparing an amateur at chess to a grandmaster.

At some point, I think your idea would be correct, but I do not think our world would look the same by the time that point is reached.

Of course, this all assumes that we can build a powerful AI that behaves in the manner we would hope.

Edited by Athanasios, 17 January 2011 - 08:28 AM.


#29 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 17 January 2011 - 01:20 PM

I hope you are right. I am not a biologist, so could you give some examples so I can get a picture of what kind of information you are talking about (to make it a convincing argument)?

The analogy between chess and science is, I think, not entirely correct. Of course one can try to figure out how the theories would look if the empirical evidence turns out one way or another (we often do that already). An AI will be particularly good at this, which certainly saves time in evaluating the final empirical data. But as long as it cannot tell the nature of reality without experiments, it is still limited by resources on a science-wide scale. It either runs dry of funding in that project faster, or it gets additional funds to progress, which in turn are taken away from other research projects (slowing them down). This is what I mean when I say that ideas for how to progress are currently plentiful while resources are not, thus constituting the important bottleneck. It doesn't matter if you have your plans outlined 4 additional steps ahead if you are slowed by underfunding (and thus have a lot of time to think anyway). E.g. if Einstein & friends had had the empirical data from throughout their lives available in short succession, they would have progressed much faster. Conversely, an SI would not have progressed much faster than they did in reality.

And rest assured that strong AI will come (though not necessarily within a few decades). Humans are a living example that it is possible and that it doesn't violate the laws of physics.


In general you are right that I cannot say what will be possible due to the arrival of SI. But neither can you (or Kurzweil). It's an experiment that needs to be done to observe the outcome. And if you make the extraordinary claim that it will progress on a science-wide scale radically faster than "normal" Is - given the same limited means - you had better be prepared to argue how it can do that. Just saying "we don't know what it is capable of" or "you lack imagination" does not answer how it will alter the imbalance of resources and ideas. True, SI can progress faster in particular fields (but not LHC-type things - those are limited by fundamental physics an SI cannot change), but the resources not spent there because of slower evaluation of data are not lost; they are invested in other promising projects (speeding things up there). Of which there are many, even in biology. The problem is not that we don't know how to reasonably spend our resources.

For radically faster progress we have to think about the resource side. Therefore I don't think that SI is the critical technology; technologies that radically alter the imbalance of resources and ideas are. Investing money in automation and robotics is, I think, the best way to go. We already have fully automated ("lights-off") factories, and once automated machines and robots replace every human worker including the service sector (important! manufacturing alone is not sufficient), we can fundamentally alter the scarcity of resources (though not eliminate it). Then we could ask: "You need a particle accelerator? Why not build 10 just to speed things up?" Only after we refine this critical technology does SI become useful. And even without SI it will radically speed up research. We just have to introduce socialism so that people do not oppose being replaced by automated machines. With 90% - 95% of people structurally unemployed, the market mechanism will largely break down anyway.

Edited by TFC, 17 January 2011 - 01:55 PM.



#30 Athanasios

  • Guest
  • 2,616 posts
  • 163
  • Location:Texas

Posted 17 January 2011 - 08:56 PM

I am not a biologist, so could you give some examples so I can get a picture of what kind of information you are talking about (to make it a convincing argument)?

Any data that gets digitized would be a good candidate. Currently that would range from analysis of genomic and epigenomic data to models based on imaging data (proteins, cells, organs, systems, etc.).

Of course one can try to figure out how the theories would look if the empirical evidence turns out one way or another (we often do that already). An AI will be particularly good at this, which certainly saves time in evaluating the final empirical data. But as long as it cannot tell the nature of reality without experiments, it is still limited by resources on a science-wide scale.

Think of having all of the final empirical data at the start but no way of knowing what it means - drowning in information. This is not 100% true in biology, but true enough to make the generalization.

In general you are right that I cannot say what will be possible due to the arrival of SI. But neither can you (or Kurzweil). It's an experiment that needs to be done to observe the outcome. And if you make the extraordinary claim that it will progress on a science-wide scale radically faster than "normal" Is - given the same limited means - you had better be prepared to argue how it can do that.

One of Kurzweil's premises is that we can predict our future ability to gather, digitize, and crunch data in any field that can be considered an information technology. Increasingly, in this area, finding information is not the limiting factor; understanding the information is. Biology is rapidly becoming an information technology, which is why he focuses so much on it. Any area that is an information-rich complex system will provide fertile ground for AI to research. Think of weather patterns. Do you think it would be information collection that would limit our ability to understand them?

For radically faster progress we have to think about the resource side.

I think this may be true for equipment such as particle accelerators but not so for a lot of other areas of research.

Saying we are running out of resources is like saying we are running out of brains - R. A. Wilson

An AI would be a magnificent 'brain'.



