
Essay against AI-Singularity


53 replies to this topic

#31 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 17 January 2011 - 11:11 PM

And if you make the extraordinary claim that it will progress on a science-wide scale radically faster than "normal" intelligences - given the same limited means - you had better be prepared to argue how it can do that.

You are presupposing that all research is equally valuable. In actuality, much research is redundant or pointless (in that it wouldn't be explored if the investigator were aware of currently available information). If an AI is able to monitor the overall research enterprise, it can allocate existing resources more efficiently, which is identical in effect to having more resources.
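A minimal sketch of this allocation point in Python, with entirely hypothetical projects and costs: a monitor that spots topically redundant work and frees its budget achieves the same effect as a funding increase.

    # Toy model: deduplicating redundant research frees budget,
    # which is equivalent in effect to receiving extra funding.
    projects = [
        {"name": "A", "cost": 10, "topic": "stem cells"},
        {"name": "B", "cost": 10, "topic": "stem cells"},  # duplicates A
        {"name": "C", "cost": 15, "topic": "fusion"},
        {"name": "D", "cost": 15, "topic": "fusion"},      # duplicates C
        {"name": "E", "cost": 20, "topic": "gene therapy"},
    ]

    seen, funded, freed = set(), [], 0
    for p in projects:
        if p["topic"] in seen:      # the monitor flags the overlap
            freed += p["cost"]      # budget released for novel work
        else:
            seen.add(p["topic"])
            funded.append(p["name"])

    total = sum(p["cost"] for p in projects)
    print(f"funded {funded}, freed {freed}/{total} units "
          f"({freed / total:.0%} effective budget increase)")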

#32 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 18 January 2011 - 03:29 AM

You are presupposing that all research is equally valuable. In actuality, much research is redundant or pointless (in that it wouldn't be explored if the investigator was aware of currently available information).


Hm, certainly not all research is equally valuable. Here it might be better to distinguish between applied research and basic research (though the latter is the foundation of the former). In basic research it is in principle hard to tell which work will turn out to be more valuable, as the outcomes are uncertain and possible applications might be many years in the future; quantum physics -> semiconductors/modern computers is a well-known example. Allocation between fields - whether we should rather fund stem cell or nuclear fusion research - is ultimately a political question (which field society values higher). Also, when competing for funding and writing research reports, scientists normally have to describe clearly the potential contributions of their research against the background of existing knowledge (in the age of the Internet), so I doubt that much basic research is redundant; funding bodies strongly dislike wasting their money.

Applied research more often has immediate commercial benefits and is predominantly done by private companies, which naturally are not as open about their findings as basic researchers. Here competitors indeed often reinvent wheels by not sharing their research secrets with each other (well, it's still capitalism). This is more a structural issue, so I am not sure whether the arrival of SI will change that.

If an AI is able to monitor the overall research enterprise, it can allocate existing resources more efficiently, which is identical in effect to having more resources.


Hm, hm... if we abolish capitalism and put an AI in control of commercial research this may well be true. And if all funding bodies globally (China jointly with the US, Russia, France...) pool their money in their respective fields to avoid any redundant basic research, it might also work there (but to a far lesser degree, due to much lower redundancy in basic research). So let's assume we save 50% of research resources in the private sector and 15% in basic research (arbitrary values, I don't have exact numbers). Those resources will surely help an AI, but I am pretty sure that normal human researchers have no lack of original ideas for spending them (= the AI cannot show off its powers). Indeed, looking back in time, the (massive) real funding increases were rapidly (you could say instantaneously) absorbed by the scientific community (basic research), and I find it a bold statement that a large percentage of this went to redundant research. Hmmm... I just realize that we could abolish capitalism right now to free up the redundant applied-research resources...

Any data that gets digitized would be a good candidate. Currently that would range from analysis of genomic and epigenomic data to models based on imaging data (proteins, cells, organs, systems, etc.).

Think of having all of the final empirical data at the start but no way of knowing what it means, drowning in information. This is not 100% true in biology but true enough to make the generalization.


Hmmm... yes, biology (the genetics branch of it) has plenty of data on genetic codes and proteins (I take it that you are mainly referring to that), but has not yet worked out much of how it all functions, e.g. under which conditions which sequences are activated, what responsibilities all those proteins have, and the folding problem. This is not an area of my expertise, but if you put it that way I agree that in the field of genetics/bioinformatics SI could be quite beneficial (I am no expert; if an actual biologist disagrees, or the raw processing power is simply missing, he/she may forgive me). The arrival of SI would very substantially aid experimental research, assuming we haven't already figured it out by then (bold assumption?).


One of Kurzweil's premises is that we can predict our future ability to gather, digitize, and crunch data in any field that can be considered an information technology. Increasingly, in this area, finding information is not the limiting factor; understanding the information is. Biology is rapidly becoming an information technology, which is why he focuses so much on it. Any area that is an information-rich complex system will provide fertile ground for AI to research. Think of weather patterns. Do you think it would be information collection that would limit our ability to understand them?


One good and one bad example. In principle I see that the better you can ground a scientific field in solid, known physics (or chemistry, which has semi-hard laws, in contrast to the "weak/breakable" laws of biology), the better you can use knowledge of the laws of physics to understand the behaviour of the research subjects. But our one-week weather forecasts are indeed limited by the availability of data and raw processing power (= resources). Radar, satellites and atmospheric weather probes did much to improve predictions, but the observation mesh is still not tight enough to resolve the small-scale phenomena (which ultimately influence everything). The mesh in the computer models is also too coarse, and influences such as a small mountain range are neglected purely because of the processing requirements, even though the physical principles themselves are pretty well understood in this case. Given the same data and raw processing power, an AI wouldn't do much better.
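To put numbers on that processing-power limit, a back-of-envelope sketch, assuming the standard scaling for explicit 3-D grid codes: halving the grid spacing multiplies the cell count by 2^3, and the stable time step shrinks with the cell size, roughly doubling the number of steps.

    # Back-of-envelope: cost of refining a 3-D weather-model grid.
    # Halving the spacing costs ~2^3 in cells times ~2 in time steps
    # (CFL-type stability limit), i.e. ~16x per halving. Illustrative only.
    def relative_cost(refinement: int) -> int:
        """Cost multiplier for a grid `refinement` times finer."""
        return refinement ** 3 * refinement  # space * time

    for factor in (2, 4, 8):
        print(f"{factor}x finer mesh -> ~{relative_cost(factor):,}x the compute")
    # 2x -> 16x, 4x -> 256x, 8x -> 4,096x: resolving a small mountain
    # range quickly outruns any fixed hardware budget.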


I think this may be true for equipment such as particle accelerators but not so for a lot of other areas of research.
Saying we are running out of resources is like saying we are running out of brains - R. A. Wilson
An AI would be a magnificent 'brain'.


I would even say it's true for almost all fields of physical research. Nice quote, btw, but I don't bow to arguments from authority (was he even talking about science?). If you could elaborate on the issue a bit further I might be able to give a qualified reply.

Edited by TFC, 18 January 2011 - 03:39 AM.



#33 TheEzEzz

  • Guest
  • 3 posts
  • 0

Posted 18 January 2011 - 05:39 AM

@TFC

You're right that for many types of research the bottleneck is funding for physical experiments. There is substantial research that does not have funding as a bottleneck, though: math, a lot of theoretical physics, data mining of the flood of data from genomic studies. A lot of physics research can be done numerically (I do computational fluid dynamics myself).

I think you also underestimate the role that intelligence plays in reducing the *need* for experiments. Einstein developed General Relativity largely without any concrete data. It had to be tested, of course, and that cost money, but you can't deny that the truly valuable contribution was the theoretical framework itself.

More importantly, it isn't fundamental research that will make AI so disruptive. It's engineering: a field where human resources *do* present the main bottleneck. Engineering new computer chips, new software, new robots, etc. The physics are all understood well enough for such pursuits to be done without any physical experiments at all.

A team of 1,000,000 AI-engineers could quickly design a fantastic new robotic apparatus for doing the lab work of 1000 poor biology grads. Lab automation is happening anyway, but it is very slow and expensive to design the systems. Massive AI teams could have huge portions of research automated very rapidly. That is to say: the current bottleneck to removing the bottlenecks in research is human resources, not funding.

#34 Elus

  • Guest
  • 793 posts
  • 723
  • Location:Interdimensional Space

Posted 18 January 2011 - 05:50 AM

Sort of funny that we're talking about this. We may have an answer about how computers and AI will influence science sooner than we think.

IBM computer beats world Jeopardy champions.



Edited by Elus, 18 January 2011 - 05:53 AM.


#35 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 18 January 2011 - 01:54 PM

@TFC

You're right that for many types of research the bottleneck is funding for physical experiments. There is substantial research that does not have funding as a bottleneck, though: math, a lot of theoretical physics, data mining of the flood of data from genomic studies. A lot of physics research can be done numerically (I do computational fluid dynamics myself).


Well, theoretical physics does have funding as a bottleneck indirectly, as it relies on empirical results to form and test theories (and to my knowledge, in fluid dynamics you usually do compare your numerics with actual quantitative experiments). As an AI also cannot, in principle, find exact solutions for many problems where we use numerics, I am not sure whether the advantage of simple brute-force processing power isn't at least equally beneficial as SI.


I think you also underestimate the role that intelligence plays in reducing the *need* for experiments. Einstein developed General Relativity largely without any concrete data. It had to be tested, of course, and that cost money, but you can't deny that the truly valuable contribution was the theoretical framework itself.


Einstein based his GTR on calculations and derivations from his STR (which in turn was based on the general development and discussion of electrodynamics, the aether, etc. in the years before). And yes, forming a coherent theory is the final goal of research, so the theory itself was the valuable asset 100 years ago and still is today; but we can only find that out when we do the experiments. I believe experimental physics today occupies significantly more resources than theoretical physics.


More importantly, it isn't fundamental research that will make AI so disruptive. It's engineering: a field where human resources *do* present the main bottleneck. Engineering new computer chips, new software, new robots, etc. The physics are all understood well enough for such pursuits to be done without any physical experiments at all.


Yes and no. I totally agree about software. About computer chips I am not sure. Certainly they do theoretical calculations, but they definitely also do experimental investigations to evaluate whether their new technology works as intended (maybe they do not trust the results, maybe they have too much money, or maybe they do it just for fun).


A team of 1,000,000 AI-engineers could quickly design a fantastic new robotic apparatus for doing the lab work of 1000 poor biology grads. Lab automation is happening anyway, but it is very slow and expensive to design the systems. Massive AI teams could have huge portions of research automated very rapidly. That is to say: the current bottleneck to removing the bottlenecks in research is human resources, not funding.


I agree, but we do not need SI for that; we need automated systems/robots (which are IMO the truly critical technologies). In principle, humans are already replaceable in many occupations, including significant parts of laboratory work. It is not done because those machines and automated systems are often pretty expensive.

To put it economically: ultimately, human labor costs are the limit in a broader sense. Production costs (e.g. the costs of producing lab automation) are, strictly speaking, predominantly labor costs - your lab equipment has its price because many humans were involved in producing it: humans harvested the raw materials, humans shipped the materials and intermediates around the globe, humans assembled and packed the product, and humans provided the related services (security, cleaning, sales, accounting, government administration, etc.). If you can replace those humans and have an unlimited supply of automated systems/robots, you have largely eliminated the scarcity of research resources (and resources in general) without SI. (Unlimited = an extremely cheap supply, achievable by fully automating the production of the automation/robots themselves, including the required raw materials.)
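A toy illustration of that decomposition, with entirely hypothetical cost shares: if the unit price is essentially a stack of labor costs, automating a growing fraction of that labor drives the price toward zero.

    # Hypothetical supply chain: unit price as a stack of direct labor
    # costs. Robots whose own production is automated have ~zero
    # marginal labor cost, so the price collapses as automation spreads.
    labor_costs = {"raw materials": 20.0, "shipping": 10.0,
                   "assembly": 40.0, "services/admin": 30.0}

    def unit_price(automated_fraction: float) -> float:
        """Price after replacing that fraction of human labor."""
        return sum(c * (1 - automated_fraction) for c in labor_costs.values())

    for f in (0.0, 0.5, 0.9, 0.99):
        print(f"{f:4.0%} of labor automated -> unit price {unit_price(f):6.2f}")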

E.g. check out this video (humanoid robot flexibility) from 4:20 min
http://www.youtube.com/watch?v=Q3C5sc8b3xM

and this one (really amazing!)
http://www.youtube.com/watch?v=P9ByGQGiVMg

#36 TheEzEzz

  • Guest
  • 3 posts
  • 0

Posted 19 January 2011 - 03:15 AM

Well, theoretical physics does have funding as a bottleneck indirectly, as it relies on empirical results to form and test theories (and to my knowledge, in fluid dynamics you usually do compare your numerics with actual quantitative experiments). As an AI also cannot, in principle, find exact solutions for many problems where we use numerics, I am not sure whether the advantage of simple brute-force processing power isn't at least equally beneficial as SI.


Yes, at some point some actual physical experiments need to be done. I would not be surprised, however, if 1,000,000 AI computational fluid dynamics scientists with a single physical lab at their disposal greatly outperformed the current fluids research community. It's hard to be sure either way, though. As for whether AI would give you something brute force wouldn't: algorithmic breakthroughs can be worth more than decades of Moore's law in terms of performance. New numerical algorithms have been known to yield speedups of 3 or even 6 orders of magnitude. Those speedups are something brute-force processing power alone won't buy you. Having a million AI numerical analysts chugging away looking for new algorithms, however...
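For scale, a quick conversion, assuming the classic ~2-year doubling period for hardware performance: a 10^6 algorithmic speedup is worth about 20 hardware doublings, i.e. roughly 40 years of Moore's law.

    import math

    # Convert an algorithmic speedup into equivalent years of Moore's
    # law, assuming performance doubles every ~2 years (classic figure).
    DOUBLING_PERIOD_YEARS = 2.0

    def moores_law_years(speedup: float) -> float:
        return math.log2(speedup) * DOUBLING_PERIOD_YEARS

    for magnitude in (3, 6):
        print(f"10^{magnitude} speedup ~ "
              f"{moores_law_years(10 ** magnitude):.0f} years of Moore's law")
    # 10^3 ~ 20 years, 10^6 ~ 40 years: one algorithmic breakthrough
    # can dwarf decades of hardware scaling.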

I believe experimental physics today occupies significantly more resources than theoretical physics.


I would believe that as well. And I would also believe that experimental physics has a higher marginal return than theoretical physics, precisely because it seems we have hit a wall in terms of armchair reasoning à la Einstein. Nonetheless, AI would greatly change the calculation. Currently it may be that the cost of funding an experimental researcher plus his lab is more worthwhile than the small cost of funding a theoretical researcher. With AI, however, we have to ask whether the experimental lab is more worthwhile than the number of theoretical researchers that can be run on silicon for an equal cost. This may be thousands or millions of researchers. And those researchers can all be copies of the *best researchers*. At some point they'll need to test some theories, but it seems likely that they could be incredibly more productive than today's research community with even fewer physical experiments to test their hypotheses.

Yes and no. I totally agree about software. About computer chips I am not sure. Certainly they do theoretical calculations, but they definitely also do experimental investigations to evaluate whether their new technology works as intended (maybe they do not trust the results, maybe they have too much money, or maybe they do it just for fun).


New fabrication techniques are very experimental, I agree. I was referring to chip design. Placing billions of transistors in a way so as to maximize performance is a difficult problem, but one which (I think) can largely be done on a theoretical basis alone. Look at field programmable gate arrays (FPGAs) for instance. If you program an FPGA to implement your particular piece of software you can get *Huge* performance boosts. Yet few people do it because it's *hard* to program FPGAs. This bottleneck would be removed with hordes of AI FPGA programmers, yielding a huge speedup in software across the board. Designing a chip from scratch to implement a piece of software gets you even greater speedups, but the design costs are even more prohibitive. A horde of AI chip designers would yield even bigger speedups.

I agree, but we do not need SI for that; we need automated systems/robots (which are IMO the truly critical technologies). In principle, humans are already replaceable in many occupations, including significant parts of laboratory work. It is not done because those machines and automated systems are often pretty expensive.


Yes, incredibly expensive to *design*. The hardware may be expensive too, but this is usually because economies of scale haven't kicked in. That is, Asimo isn't put together by a robot the way a memory chip or a car is. But that's because it currently isn't economical to design and program the robots that would build Asimo. With hordes of AI robotics researchers/engineers that need not be the case. Indeed, we already have sufficiently sophisticated robotics to construct 'universal' construction robots, by which I mean robots capable of filling essentially any role currently filled by low-wage labor. Those robots could be mass-produced at cheap prices and installed in factories and farms all over the world... except we don't have software good enough to make any use of them right now. With today's technology we can make a robotic arm with 30 degrees of freedom, but writing a program to use that arm in an intelligent and robust way is beyond us. Robotics is not limited by hardware; it's limited by software. AI would fix that.

To put it economically: ultimately, human labor costs are the limit in a broader sense. Production costs (e.g. the costs of producing lab automation) are, strictly speaking, predominantly labor costs - your lab equipment has its price because many humans were involved in producing it: humans harvested the raw materials, humans shipped the materials and intermediates around the globe, humans assembled and packed the product, and humans provided the related services (security, cleaning, sales, accounting, government administration, etc.). If you can replace those humans and have an unlimited supply of automated systems/robots, you have largely eliminated the scarcity of research resources (and resources in general) without SI. (Unlimited = an extremely cheap supply, achievable by fully automating the production of the automation/robots themselves, including the required raw materials.)


Responding in particular to "If you can replace those humans and have an unlimited supply of automated systems/robots, you have largely eliminated the scarcity of research resources (and resources in general) without SI."

Automation can and does have profound effects on research, but you still need people doing the actual science and analysis. Science does not progress solely by means of doing experiments. The job of theoreticians is not to simply provide a constant stream of random hypotheses and experiments for experimentalists to perform in the hopes that such a random walk will eventually stumble upon better theories. In general, the more a theory is developed before it is experimentally tested the easier it becomes to falsify the theory. That is, theoretical work increases the usefulness of experiments. It may be that 'simple' automation will exhaust the potential for reducing bottlenecks in physical experiments (although I think AI will get us to that point much, much more quickly), but what AI buys us is an increase in theoretical capacity.

Imagine two groups, both with fixed resources for physical experiments. The first group is today's research community. The second group is an AI research community, with 1000 times as many researchers, all of them of the highest caliber, each running 1000 times faster than a human. Both groups are limited by resources for experiments. The second group, however, will have vast amounts of mental capital to spend on selecting the best experiments to perform. String theory isn't testable yet? Fine, have a few thousand researchers flesh out the mathematical theory for a few hundred years (a few months of real time) and see if any testable ramifications are found. Repeat for a dozen flavors of quantum gravity. Sure, at some point you've gotta do an actual experiment, but it seems to me the second group is going to have a huge leg up in choosing good experiments.
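For the record, the arithmetic behind the "few months" aside; the team size and month count are illustrative picks within the post's "a few thousand"/"a few months".

    # Sanity check on the thought experiment: a few thousand AI
    # researchers, each running 1000x human speed, for four real months.
    speedup = 1000            # subjective years per calendar year
    team = 3000               # "a few thousand researchers" (assumed)
    real_months = 4           # "a few months real time" (assumed)

    subjective_years = speedup * real_months / 12          # per researcher
    print(f"{subjective_years:.0f} subjective years each")    # ~333 years
    print(f"{team * subjective_years:,.0f} researcher-years of theory")
    # ~1,000,000 researcher-years spent fleshing out one theory before
    # a single experiment is funded.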

E.g. check out this video (humanoid robot flexibility) from 4:20 min
http://www.youtube.com/watch?v=Q3C5sc8b3xM

and this one (really amazing!)
http://www.youtube.com/watch?v=P9ByGQGiVMg


I still laugh every time I watch this


I cry a little inside too =O

#37 Dmitri

  • Guest
  • 841 posts
  • 33
  • Location:Houston and Chicago

Posted 21 January 2011 - 10:24 AM

There's not really an argument I can give you, I can only say that you are suffering from a serious deficit of imagination :p

Well yeah, that's definitely a non-argument, doh.

I'm worried about the sanity of the created superintelligences. How can it be guaranteed that these superintelligent and probably conscious machines will stay sane if everything in them happens a billion times faster than human thought?


It seems you have been ignored, but I agree. People here seem far too optimistic about super-AIs and haven't considered that these sentient beings may become so intelligent that they see no use for us and decide to get rid of us, or, in a more peaceful scenario, distance themselves to build their own future, ignoring us altogether. Also, do you believe humans can create a security program to prevent this in an AI that is supposed to be far more intelligent than us? There is also the question of the ethical issues that could spring up if these machines are sentient: wouldn't forcing them to do what we want be equivalent to slavery?

#38 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 21 January 2011 - 11:24 AM

There's not really an argument I can give you, I can only say that you are suffering from a serious deficit of imagination :p

Well yeah, that's definitely a non-argument, doh.

I'm worried about the sanity of the created superintelligences. How can it be guaranteed that these superintelligent and probably conscious machines will stay sane if everything in them happens a billion times faster than human thought?


It seems you have been ignored, but I agree. People here seem far too optimistic about super-AIs and haven't considered that these sentient beings may become so intelligent that they see no use for us and decide to get rid of us, or, in a more peaceful scenario, distance themselves to build their own future, ignoring us altogether. Also, do you believe humans can create a security program to prevent this in an AI that is supposed to be far more intelligent than us?


Whoa, wait a minute: I completely agree here. I am not at all optimistic about the outcomes of AI. It is the most dangerous idea there is, really. The fact that it will be as disruptive as I am saying is exactly why it is so insanely dangerous.

Edited by RighteousReason, 21 January 2011 - 11:28 AM.


#39 Guest

  • Topic Starter
  • Guest
  • 320 posts
  • 214

Posted 06 February 2011 - 04:18 PM

I apologize for my late reply, but I have been a bit busy in recent weeks. I will be able to be more responsive in the coming weeks. Back to the issue:

Yes, at some point some actual physical experiments need to be done. I would not be surprised, however, if 1,000,000 AI computational fluid dynamics scientists with a single physical lab at their disposal greatly outperformed the current fluids research community. It's hard to be sure either way, though. As for whether AI would give you something brute force wouldn't: algorithmic breakthroughs can be worth more than decades of Moore's law in terms of performance. New numerical algorithms have been known to yield speedups of 3 or even 6 orders of magnitude. Those speedups are something brute-force processing power alone won't buy you. Having a million AI numerical analysts chugging away looking for new algorithms, however...


And I wouldn't be surprised if an SI did not incredibly outperform the whole fluid dynamics community. Well, that's not totally right. You are right that with the broad availability of computers, huge progress in numerical methods was achieved as well (I would even say that this progress is of the same order of magnitude as the progress in microelectronics). So if you compare the way they did their calculations in the 70s with today's framework, yes, I agree. But apparently that progress wasn't as steady as the progress in processing power, which even today continues to improve according to Moore's law. Of course I cannot exclude that AI will find methods or new foundations of math that current researchers don't even think about. But this comparison means that we could roughly compensate for the lack of AI by increasing processing power. Not very honorable for the former... (and not comparable to genomics).

I would believe that as well. And I would also believe that experimental physics has a higher marginal return than theoretical physics, precisely because it seems we have hit a wall in terms of armchair reasoning à la Einstein. Nonetheless, AI would greatly change the calculation. Currently it may be that the cost of funding an experimental researcher plus his lab is more worthwhile than the small cost of funding a theoretical researcher. With AI, however, we have to ask whether the experimental lab is more worthwhile than the number of theoretical researchers that can be run on silicon for an equal cost. This may be thousands or millions of researchers. And those researchers can all be copies of the *best researchers*. At some point they'll need to test some theories, but it seems likely that they could be incredibly more productive than today's research community with even fewer physical experiments to test their hypotheses.


I would disagree with this interpretation (though speaking primarily for optics/lasers). I admit that in a number of areas there is a lack of (correct/validated) theoretical models, e.g. surface-field interactions. But this is not primarily because physicists cannot connect the dots in their empirical data - at least judging from the current understanding of the gaps in our knowledge. Hypothetically speaking, I cannot of course exclude that it might nonetheless be so - but just because we cannot exclude it doesn't mean it has to be the case either, so it is somewhat pointless to debate this, let alone base SI-revolution predictions on it. I take it more that in many cases the available empirical data is insufficient to develop good theoretical descriptions (given that some kinds of lasers and radiation sources, such as free-electron lasers, have been available for just a couple of years, this is not too surprising). Other research objects, such as the amazing Bose-Einstein condensates, are full of predictions where the issue is currently to validate them empirically, and more funding is required for experimental confirmation. In the case of (STED) super-resolution microscopy, the researchers even had to promote their idea for years before actually getting funding to build it. In short, I do not see a broad stagnation caused by a failure to connect the dots of currently available empirical data, but rather the need to test existing theoretical concepts, or to do the first-time experiments you need to ground an extended theory on.
Concerning "worthwhile": I agree that funding bodies are too fixated on requiring short-term benefits from projects, but this applies equally to experimental and theoretical projects.


New fabrication techniques are very experimental, I agree. I was referring to chip design. Placing billions of transistors in a way so as to maximize performance is a difficult problem, but one which (I think) can largely be done on a theoretical basis alone. Look at field programmable gate arrays (FPGAs) for instance. If you program an FPGA to implement your particular piece of software you can get *Huge* performance boosts. Yet few people do it because it's *hard* to program FPGAs. This bottleneck would be removed with hordes of AI FPGA programmers, yielding a huge speedup in software across the board. Designing a chip from scratch to implement a piece of software gets you even greater speedups, but the design costs are even more prohibitive. A horde of AI chip designers would yield even bigger speedups.


I am not very familiar with FPGAs; judging from a quick Google search, it seems there are considerable tradeoffs compared to ASICs, even if you can program them perfectly. Anyway, in principle I agree that SI will speed things up, but in the end I think we also have to define what you consider a technological singularity, and whether those "traditional" processing-power speedups satisfy it or whether we need a new kind of physics.

Yes, incredibly expensive to *design*. The hardware may be expensive too, but this is usually because economies of scale haven't kicked in. That is, Asimo isn't put together by a robot the way a memory chip or a car is. But that's because it currently isn't economical to design and program the robots that would build Asimo. With hordes of AI robotics researchers/engineers that need not be the case. Indeed, we already have sufficiently sophisticated robotics to construct 'universal' construction robots, by which I mean robots capable of filling essentially any role currently filled by low-wage labor. Those robots could be mass-produced at cheap prices and installed in factories and farms all over the world... except we don't have software good enough to make any use of them right now. With today's technology we can make a robotic arm with 30 degrees of freedom, but writing a program to use that arm in an intelligent and robust way is beyond us. Robotics is not limited by hardware; it's limited by software. AI would fix that.


You seem to imply that the costs of designing the digital model constitute the bulk of the price of industrial robots, or whatever machine you employ for automation. That's simply not the case (enlighten me if I am misguided). The reason automated factories for humanoid robots are not being built is not the design cost of the required automation, but the low demand for mass-produced HRs. It's a relatively new product which still needs to be perfected. Also, the problem is not (primarily) programming the degrees of freedom, but getting conceptual knowledge into the memory of robot workers. As seen in one of the videos, a cleaning robot needs to know/learn which kinds of items it can pick up, which to leave on the ground and clean around, what kind of pressure to use, etc.

It's no different with humans: we basically need the first 6 or 7 years to learn those concepts, the control of our body, and environmental interaction. Unlike with humans, of course, we can copy what has been learned for mass production. And as this is more a step toward getting SI in the first place, we unfortunately cannot count on a non-existent SI to do this work for us. Which leads me back to my previous point: either way, robotics is our critical technology for overcoming current research limitations, and even SI will be much, much less useful without robotics in place (also, robotics currently appears to be a much more feasible technology than SI).

Responding in particular to "If you can replace those humans and have an unlimited supply of automated systems/robots, you have largely eliminated the scarcity of research resources (and resources in general) without SI."

Automation can and does have profound effects on research, but you still need people doing the actual science and analysis. Science does not progress solely by means of doing experiments. The job of theoreticians is not to simply provide a constant stream of random hypotheses and experiments for experimentalists to perform in the hopes that such a random walk will eventually stumble upon better theories. In general, the more a theory is developed before it is experimentally tested the easier it becomes to falsify the theory. That is, theoretical work increases the usefulness of experiments. It may be that 'simple' automation will exhaust the potential for reducing bottlenecks in physical experiments (although I think AI will get us to that point much, much more quickly), but what AI buys us is an increase in theoretical capacity.


Yes, SI (I use the more general term "superintelligence") definitely increases theoretical research - that's the one point I have acknowledged since the start of the discussion. However, my point was that even current theoretical knowledge has no problem at all absorbing ever-increasing research resources, and it doesn't even need a stream of random hypotheses for that. But for an SI to deliver a much better grasp of theory plus massive resource savings, even in cases where funding for experiments is provided (and there is much research for which funding is not even available, due to limited resources), would mean that the whole human research community simply overlooks so much that many or most experiments are not needed - and this across a broad range of fields within the sciences. This might be the case, but to use it as the basis of your singularity argument, the assumption needs to be justified.


Imagine two groups, both with fixed resources for physical experiments. The first group is today's research community. The second group is an AI research community, with 1000 times as many researchers, all of them of the highest caliber, each running 1000 times faster than a human. Both groups are limited by resources for experiments. The second group, however, will have vast amounts of mental capital to spend on selecting the best experiments to perform. String theory isn't testable yet? Fine, have a few thousand researchers flesh out the mathematical theory for a few hundred years (a few months of real time) and see if any testable ramifications are found. Repeat for a dozen flavors of quantum gravity. Sure, at some point you've gotta do an actual experiment, but it seems to me the second group is going to have a huge leg up in choosing good experiments.


The same issue as described before: maybe we are overlooking the obvious (or the not-so-obvious) - maybe not, and the LHC will bring us the empirical input for the Higgs/quantum gravity. And as there are currently no experiments underway explicitly to prove string theory, it doesn't add to the pressure on resources; i.e., money spent on proving string theory would be money not spent on investigating other fields of research (mind uploading, etc.).


I still laugh every time I watch this

I cry a little inside too =O


This is not exactly a sober comment. Stick to the facts: in the last decade robotics, especially HRs, made huge progress. It's as if they have figured out the principles (like building the first airplane or transistor) and are now set for steady progress - can you claim the same for SI research? If you look at the pipeline you see that every year new models with advanced features are scheduled. Assuming slow progress in robotics is unjustified. Of course more funding would yield more progress, as always...
New HR models cost about 250,000-400,000 dollars, and that at the prototype level, without mass production. Mass production should bring the price down to the level of an average car, at which point, with even further enhanced hardware and software, it will be cheaper to use them even for blue-collar jobs.
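A rough break-even sketch with entirely hypothetical numbers (car-like price, loaded wage, shift pattern are all assumptions): at that price point, a robot that runs three shifts undercuts a human wage within months.

    # Hypothetical break-even: mass-produced humanoid robot vs. worker.
    robot_price = 30_000              # assumed "average car" price, USD
    robot_upkeep = 5_000              # assumed power + maintenance, /year
    hours_per_year = 3 * 8 * 350      # three shifts, ~350 days
    human_wage = 15.0                 # assumed loaded cost, USD/hour

    human_cost = human_wage * hours_per_year   # same workload by humans
    payback_years = robot_price / (human_cost - robot_upkeep)
    print(f"human cost for those hours: ${human_cost:,.0f}/year")
    print(f"robot pays for itself in ~{12 * payback_years:.0f} months")
    # ~$126,000/year vs a one-off $30,000 + $5,000/year: payback ~3 months.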



Look, the point I am trying to make is that people here tend to attach way too much importance to SI/AI, which is no wonder-machine. It makes no sense for transhumanists or singularitarians to hold back support for life-extension research today on the assumption that SI will do today's research in no time, that it generally doesn't need empirical input, or that normal intelligences are generally just very bad at connecting the dots (and, apparently, that we will still be in good shape when SI arrives).


As to the dangers of AI: it really depends on whether we put AIs in charge of everything. Give one control over the US nukes and it just might decide to pressure humans into serving its goals. There is actually a relatively entertaining movie about this scenario: http://en.wikipedia...._Forbin_Project

#40 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,042 posts
  • 2,000
  • Location:Wausau, WI

Posted 20 October 2011 - 07:44 PM

Paul Allen spoke about this issue recently: Challenging Singularity predictions.

While we suppose this kind of singularity might one day occur, we don't think it is near. In fact, we think it will be a very long time coming. Kurzweil disagrees, based on his extrapolations about the rate of relevant scientific and technical progress. He reasons that the rate of progress toward the singularity isn't just a progression of steadily increasing capability, but is in fact exponentially accelerating—what Kurzweil calls the "Law of Accelerating Returns." He writes that:


So we won't experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today's rate). The "returns," such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity ... [1]


By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045.


This prediction seems to us quite far-fetched. Of course, we are aware that the history of science and technology is littered with people who confidently assert that some event can't happen, only to be later proven wrong—often in spectacular fashion. We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.
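For reference, the "20,000 years" headline can be reproduced with a toy version of the Law of Accelerating Returns. The doubling period below is an assumption (Kurzweil's own is roughly a decade), and the total is quite sensitive to it, which is part of Allen's complaint.

    import math

    # Toy Law of Accelerating Returns: the rate of progress, in
    # years-of-progress-at-today's-rate per calendar year, doubles
    # every `d` years. Century total = integral of 2^(t/d), t = 0..100.
    def century_progress(d: float) -> float:
        return d / math.log(2) * (2 ** (100 / d) - 1)

    for d in (9.5, 10.0, 11.0):
        print(f"doubling every {d:>4} yrs -> "
              f"~{century_progress(d):,.0f} years of progress")
    # ~20,200 / ~14,800 / ~8,600: the headline figure swings widely
    # with the assumed doubling period.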



#41 DAMABO

  • Guest
  • 181 posts
  • 4
  • Location:Mars

Posted 12 April 2012 - 09:56 PM

The process leading to a singularity is already happening, and it is the most relevant part for us. However, the attainment of an actual Singularity will IMO not be of any real use to humans. An AI entity that is more intelligent than Einstein or that conducts perfect/blameless research has no place in humanity. I fail to see the point of a world where super-intelligent machines rule, and where biological humans are nothing but frail, stupid, ephemeral creations. This is not the point. I believe in human enhancement and transhumanism, but that is where it must end. Going any further (to posthumanism) will mean the end of human biology and consequently the end of emotions, feelings etc., i.e. the end of us and the beginning of an alien, cold, emotionless and pointless world.


wrong. robots will have emotions.

#42 Marios Kyriazis

  • Guest
  • 466 posts
  • 255
  • Location:London UK

Posted 13 April 2012 - 06:44 PM

The process leading to a singularity is already happening, and it is the most relevant part for us. However, the attainment of an actual Singularity will IMO not be of any real use to humans. An AI entity that is more intelligent than Einstein or that conducts perfect/blameless research has no place in humanity. I fail to see the point of a world where super-intelligent machines rule, and where biological humans are nothing but frail, stupid, ephemeral creations. This is not the point. I believe in human enhancement and transhumanism, but that is where it must end. Going any further (to posthumanism) will mean the end of human biology and consequently the end of emotions, feelings etc., i.e. the end of us and the beginning of an alien, cold, emotionless and pointless world.


wrong. robots will have emotions.


Please explain how.

#43 DAMABO

  • Guest
  • 181 posts
  • 4
  • Location:Mars

Posted 13 April 2012 - 08:24 PM

The process leading to a singularity is already happening, and it is the most relevant part for us. However, the attainment of an actual Singularity will IMO not be of any real use to humans. An AI entity that is more intelligent than Einstein or that conducts perfect/blameless research has no place in humanity. I fail to see the point of a world where super-intelligent machines rule, and where biological humans are nothing but frail, stupid, ephemeral creations. This is not the point. I believe in human enhancement and transhumanism, but that is where it must end. Going any further (to posthumanism) will mean the end of human biology and consequently the end of emotions, feelings etc., i.e. the end of us and the beginning of an alien, cold, emotionless and pointless world.


wrong. robots will have emotions.


Please explain how.


All the things we always thought uniquely human, and thought would never be achieved by AI, will eventually be achieved: chess, creativity, even an AI scientific researcher (recently; it should be visible in one of the posts in this forum's computer science and AI section). Eventually even emotions will become fully understood, and we will be able to create AIs with emotions.
Kurzweil discussed this. He said that these machines will have all the subtleties of the human brain, emotions clearly being a part of them.

#44 Marios Kyriazis

  • Guest
  • 466 posts
  • 255
  • Location:London UK

Posted 16 April 2012 - 07:10 AM


wrong. robots will have emotions.


Please explain how.


All the things we always thought uniquely human, and thought would never be achieved by AI, will eventually be achieved: chess, creativity, even an AI scientific researcher (recently; it should be visible in one of the posts in this forum's computer science and AI section). Eventually even emotions will become fully understood, and we will be able to create AIs with emotions.
Kurzweil discussed this. He said that these machines will have all the subtleties of the human brain, emotions clearly being a part of them.


Kurzweil is a futurist who can say anything without the need for proof (the absence of proof is the essence of futurology). In order for artificial machines/robots to have full human emotions, they would have to be constructed in such a way, and with such materials, as to mimic the exact make-up of organic humans, which means they would be as frail and as vulnerable as we humans are. So, what is the point?

#45 DAMABO

  • Guest
  • 181 posts
  • 4
  • Location:Mars

Posted 16 April 2012 - 09:50 AM

wrong. robots will have emotions.


Please explain how.


All the things we always thought uniquely human, and thought would never be achieved by AI, will eventually be achieved: chess, creativity, even an AI scientific researcher (recently; it should be visible in one of the posts in this forum's computer science and AI section). Eventually even emotions will become fully understood, and we will be able to create AIs with emotions.
Kurzweil discussed this. He said that these machines will have all the subtleties of the human brain, emotions clearly being a part of them.


Kurzweil is a futurist who can say anything without the need for proof (the absence of proof is the essence of futurology). In order for artificial machines/robots to have full human emotions, they would have to be constructed in such a way, and with such materials, as to mimic the exact make-up of organic humans, which means they would be as frail and as vulnerable as we humans are. So, what is the point?


The point is this: instead of relying on the assumption that we are so special that we can't be mimicked, we should, in line with what AI has achieved already, take the opposite stance: that AI will first emulate our capacities and then enhance them. First we defined the uniquely human characteristics, and then they turned out to be achievable by AI. This pattern is going to be repeated for another few decades.
I think it takes a bigger leap of faith to trust your own extrapolations of the future than to trust the best futurist in the world. Of course, the road to matching the delicacies of humans is still long, but almost everything people said science could never achieve has been, or will eventually be, achieved: flight, immortality, spacecraft, AI, etc.

#46 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 16 April 2012 - 02:45 PM

In order for artificial machines/robots to have full human emotions, they would have to be constructed in such a way, and with such materials, as to mimic the exact make-up of organic humans, which means they would be as frail and as vulnerable as we humans are.


Why should this be the case? Why could emotions not be implemented in software? Human thought is, at its base, a chemical process. With AI, we essentially implement thought in software, so why not other chemical (or, more accurately, partially chemical) processes?

There's no point in machines being frail and vulnerable, other than perhaps as laboratory curiosities. It is far more likely that we will code up AIs that are very good at 'faking' emotion, so that we have an easier time relating to them.

Otherwise, what's the point of machine emotion, anyway? I don't want the robot surgeon that's operating on my brain to suddenly get panicky, or scared, or have an existential crisis. I want that robot to be cool under pressure, not emotional. Likewise with most robo/AI applications.

#47 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,042 posts
  • 2,000
  • Location:Wausau, WI

Posted 16 April 2012 - 03:30 PM

Someone mentioned earlier in this thread that humans are the bottleneck in physical lab work, AI research, and implementation. I wonder if people are aware of how automated most of the economy already is.

Take a toothbrush for example. No human need touch that brush or the materials needed to make it until it reaches the end user.

1. The raw materials mostly come from wells that were drilled years ago (mostly by heavy machinery).

2. The oil flows automatically into pipelines (and into ships). In the case where it needs to be shipped across the ocean, there is not much need for human pilots, given omnipresent satellite navigation.

3. The oil flows to the refining plant or plastics plant which is automated from end to end.

4. The plastic or fluid flows through another pipeline or via trains to the toothbrush plant. (again in today's world the need for humans driving trains is quite minimal).

5. The toothbrush construction and packaging is 100% automated. No human hands touch the actual product.

6. The finished product is loaded onto trains or trucks (autonomous trucks are right around the corner) and shipped to stores or distribution centers. More and more distribution centers are using completely robotic storage and sorting.

7. A human hand might touch the toothbrush package when it is put on a store shelf.

8. Then you buy it.

There is a substantial amount of AI already running the toothbrush manufacturing industry from end to end. Only a little more and it will be completely automated. How many more industries and products will be this way next year? In a decade? I venture to guess quite a few - most of them within a couple of decades - including a lot of scientific research.

Edited by Mind, 16 April 2012 - 03:32 PM.


#48 Marios Kyriazis

  • Guest
  • 466 posts
  • 255
  • Location:London UK

Posted 16 April 2012 - 06:50 PM

Damabo/Niner/Mind, I have nothing against technology, automation and AI as long as these remain our servants (i.e. they are there to enhance us, not to replace us).

I can't see the point of machines that have real emotions, although I am willing to see machines that have intelligence equal to or higher than ours. In my earlier post I mentioned intelligence; to be more precise, I meant emotions derived from high intelligence. Emotional machines have no real purpose in this respect. We have enough emotions to cope with in the human world; why do we need more emotions from other sources?

#49 DAMABO

  • Guest
  • 181 posts
  • 4
  • Location:Mars

Posted 16 April 2012 - 07:25 PM

Damabo/Niner/Mind, I have nothing against technology, automation and AI as long as these remain our servants (i.e. they are there to enhance us, not to replace us).

I can't see the point of machines that have real emotions, although I am willing to see machines that have intelligence equal to or higher than ours. In my earlier post I mention intelligence, whereas to be more precise, I meant emotions that are derived from high intelligence. Emotional machines have no real purpose in this respect. We have enough emotions to cope with in the human world, why do we need more emotions from other sources?


It seems like you think emotions are useless. They are not. They can signal danger (fear), they can wake you up. Feeling good is also contingent on praise, and praise is contingent on good deeds; therefore it is very useful to society. Emotions are very important for cooperating and moral behavior, but most of all, to survive (fear). If emotions were indeed useless, why would we want them anyway? The fact that people detest an emotionless world should be indicative of its utility. So, if one plans to merge with an AI, you'd better pick one that has survival capabilities. If, however, you want to create an AI that is purely made for one function and you operate it yourself, as your slave, then you'd better pick one without emotions.

Edited by DAMABO, 16 April 2012 - 07:28 PM.


#50 Marios Kyriazis

  • Guest
  • 466 posts
  • 255
  • Location:London UK

Posted 16 April 2012 - 07:28 PM

It seems like you think emotions are useless. They are not. They can signal danger (fear), they can wake you up. Feeling good is also contingent on praise, and praise is contingent on good deeds; therefore it is very useful to society. Emotions are very important for cooperating and moral behavior, but most of all, to survive (fear). If emotions were indeed useless, why would we want them anyway? The fact that people detest an emotionless world should be indicative of its utility.


Emotions are useful in humans. They are useless in machines. That has been my point all along.
  • like x 1

#51 DAMABO

  • Guest
  • 181 posts
  • 4
  • Location:Mars

Posted 16 April 2012 - 07:31 PM

It seems like you think emotions are useless. They are not. They can signal danger (fear), they can wake you up. Feeling good is also contingent on praise, and praise is contingent on good deeds; therefore it is very useful to society. Emotions are very important for cooperating and moral behavior, but most of all, to survive (fear). If emotions were indeed useless, why would we want them anyway? The fact that people detest an emotionless world should be indicative of its utility.


Emotions are useful in humans. They are useless in machines. That has been my point all along.


That was quite a quick response; in the meantime I had already edited my post. You did not read this line:
"So, if one plans to merge with an AI, you'd better pick one that has survival capabilities. If, however, you want to create an AI that is purely made for one function and you operate it yourself, as your slave, then you'd better pick one without emotions."

#52 steampoweredgod

  • Guest
  • 409 posts
  • 94
  • Location:USA

Posted 20 April 2012 - 11:06 AM

wrong. robots will have emotions.


Please explain how.


All the things we always thought uniquely human, and thought would never be achieved by AI, will eventually be achieved: chess, creativity, even an AI scientific researcher (recently; it should be visible in one of the posts in this forum's computer science and AI section). Eventually even emotions will become fully understood, and we will be able to create AIs with emotions.
Kurzweil discussed this. He said that these machines will have all the subtleties of the human brain, emotions clearly being a part of them.


Kurzweil is a futurist who can say anything without the need for proof (the absence of proof is the essence of futurology). In order for artificial machines/robots to have full human emotions, they would have to be constructed in such a way, and with such materials, as to mimic the exact make-up of organic humans, which means they would be as frail and as vulnerable as we humans are. So, what is the point?


It is not about materials.
Read Stephen Wolfram's book A New Kind of Science, on the computational equivalence of processes in nature.

Also read the article "It from Bit".

And realize that the quantum physicists' dream of free will is likely a delusion: superdeterminism reigns, bringing about hidden variables and the concreteness of a digital GUT.

Regarding human brain structure, both cellular automata theory and fractal theory are needed to fully grasp it. The neuron is the ultimate transmitter and processor: a fractal cellular structure with changing morphology suited to its computational function.

http://www.youtube.com/watch?v=l6Bh__EAAg0

#53 steampoweredgod

  • Guest
  • 409 posts
  • 94
  • Location:USA

Posted 20 April 2012 - 11:12 AM

Regarding the previous post: remember that an antenna is both receiver and emitter, and a fractal is, mathematically, a perfect antenna; information transmission is the essence of information processing and control in networks such as neural networks.
more fractals

Arthur C. Clarke, 2001 author


As for some futuristic video: note that nuclear batteries are real, can last for over a century, have no possibility of exploding or anything, and can be shielded with lead.

Nice futuristic video.

Edited by steampoweredgod, 20 April 2012 - 11:17 AM.



#54 Exception

  • Guest
  • 44 posts
  • 9
  • Location:Ontario, Canada.

Posted 21 April 2012 - 03:00 AM

Paul Allen spoke about this issue recently: Challenging Singularity predictions.

While we suppose this kind of singularity might one day occur, we don't think it is near. In fact, we think it will be a very long time coming. Kurzweil disagrees, based on his extrapolations about the rate of relevant scientific and technical progress. He reasons that the rate of progress toward the singularity isn't just a progression of steadily increasing capability, but is in fact exponentially accelerating—what Kurzweil calls the "Law of Accelerating Returns." He writes that:


So we won't experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today's rate). The "returns," such as chip speed and cost-effectiveness, also increase exponentially. There's even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity ... [1]

By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045.

This prediction seems to us quite far-fetched. Of course, we are aware that the history of science and technology is littered with people who confidently assert that some event can't happen, only to be later proven wrong—often in spectacular fashion. We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.


It doesn't seem too far-fetched to me. Powered flight was invented in 1903; humans landed on the Moon in 1969. That's a mere 66-year difference. I know it's an unrelated field, but I'm just pointing out that I don't find massive technological change within the span of a few decades hard to believe at all.



