  LongeCity
              Advocacy & Research for Unlimited Lifespans





China builds first nanobot?


50 replies to this topic

#31 justinb

  • Guest
  • 726 posts
  • 0
  • Location:California, USA

Posted 20 April 2005 - 04:25 AM

I say we create an EMP cannon in space that can fire a small but powerful blast at a particular region of the ground, a square mile or two. There should be no or very few casualties. Obviously I am choosing this option because it is extremely sexy. ;))

#32 antilithium

  • Guest
  • 77 posts
  • 1
  • Location:Tucson, Arizona

Posted 20 April 2005 - 05:56 AM

Thank you for the link, Tarm.

It surprises me that nobody in this topic has commented on the actual SPM (scanning probe microscope) method used by the Chinese researchers. Here’s the article from Tarm’s link:

The Shenyang Institute of Automation, a part of the Chinese Academy of Sciences, developed a prototype robot able to operate at the nanometer level. On a computer screen, one can see the operator manipulating the robot to chisel out the three letters ‘SIA’ on a 1 μm × 2 μm silicon base wafer. In another demonstration, the operator accurately moved a carbon nanotube 4 μm long and 100 nm thick into a carved trough on a 5 μm × 5 μm silicon base wafer.

Researchers realized the technical combination of the scanning movement of a scanning probe microscope (SPM) with robot control technology at the nanometer level. They also established an object-moving and mechanical analysis model at the same scale, and an associated 3-D process to analyze and decouple probe shape variations. Other key technologies that the team has worked out include real-time probe information collection and processing, interactive control of force/visual feedback, and artificial/natural marker-based positioning feedback control. These technologies have upgraded the robot’s operational accuracy to the nanometer level. Testing shows that its repeat positioning error is less than 5 pixels in an area of 512 pixels, an accuracy reaching 1%. When moving a carbon nanotube, its repeat positioning accuracy reached 30 nm. In a route-marker-based positioning test, the error is less than 4 nm.

What amazes viewers is that while manipulating the robot, the operator gets visual and force feedback in real time. That means the operator can literally feel the force pushing nano objects, and see the movement of the pushed nano objects.
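The accuracy figures in the quote are easy to sanity-check; this is just illustrative arithmetic on the two numbers the article gives (field size and error bound), nothing more:

```python
# Sanity check on the quoted SPM positioning figures.
FIELD_PIXELS = 512        # scan field size quoted in the article
MAX_ERROR_PIXELS = 5      # repeat positioning error bound from the article

relative_error = MAX_ERROR_PIXELS / FIELD_PIXELS
print(f"relative error: {relative_error:.2%}")  # just under 1%, matching the claim
```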


It seems that there was a little disinformation: the Chinese “Nano Robo Hand” is really an SPM amalgam of software and hardware enabling the user to “see” and “feel” atom manipulation.

In fact, the University of North Carolina performed this very method in 1999.

The Nanomanipulator:

Scanning-probe microscopes (SPMs) allow the investigation and manipulation of surfaces down to the atomic scale. The nanoManipulator (nM) system provides an improved, natural interface to SPMs, including Scanning Tunneling Microscopes (STMs) and Atomic Force Microscopes (AFMs). The nM couples the microscope to a virtual-reality interface that gives the scientist virtual telepresence on the surface, scaled by a factor of about a million to one. It provides new ways of interacting with materials and objects at the nanometer scale, placing the scientist on the surface, in control, while an experiment is happening.


Fascinating. Moving atoms and molecules using the tip of an SPM has been known since researchers used it to spell “IBM” (35 xenon atoms) on top of a nickel crystal surface.

http://images.google...mages/stm10.jpg

Nanotechnology is currently in its infancy, so saying that China (or any nation) will become a “nanopower” is somewhat premature. The existing top three contenders in “nanotech” research are:

The United States
Japan
China

Any of these three could become the world’s first “nanopower”, though by some stroke of luck another nation might get there first. And that doesn’t exclude multinational corporations either.

I would bet that the first “nanopower” will actually be a corporation. Why? Look at the IT field: there was state involvement, but businesses pushed many of its applications. A capitalist market is a great testing ground for the revision and testing of new applications. A government may support research and thus provide the technology, but a competitive open market lets Darwinian selection provide sufficient design and oversight.

I’m not talking about fraudulent Nanobusiness-Alliance-type marketing, but marketing where there is a practical application: IBM, Intel, AT&T, Texas Instruments, General Electric and many more. Any one of these businesses has the potential to market true MNT and become the first. I believe that a corporation will be responsible for the first “breakthrough”. But that is only an opinion.

Hey Susmariosep, here's a link that gives a good explanation of nanomachines and their power sources. Plus two free ebooks from the Foresight Institute: Unbounding the Future and Engines of Creation. These will give you a good introduction to nanotechnology. Just trying to help. [thumb]

#33 armrha

  • Guest
  • 187 posts
  • 0

Posted 20 April 2005 - 12:28 PM

Cut off their energy supply.
I wouldn't worry much about red nanos.
Just cut off their energy sources, the enemies of the US that is, then their nanos won't pose any threats to anyone.
That's why I like to read about how energy is being fed to nanites and how these nanites are going to process energy in order to move atoms around.
You need electricity to split water into its respective molecules of hydrogen and oxygen. Plants need sunlight to do their photosynthesis.
Nanites will certainly need energy to make atoms do what they do not do on their own in the natural chemical reactions of synthesis and decomposition whereby compounds are produced or broken up.


Unfortunately, it's not that simple. Red nanos won't have like a power cord going to them. Well-designed red nanos could contain their own power supplies, like tiny bits of radioactive material scrounged from brick and mortar, or be powered off sunlight, small batteries, all kinds of different things...
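To put a rough number on the energy question raised in the quote, here's a back-of-envelope sketch. The values are standard textbook constants (Gibbs free energy of water splitting, Avogadro's number), not figures from the thread:

```python
# Minimum thermodynamic energy needed to split one water molecule.
# H2O(l) -> H2 + 1/2 O2 has a standard Gibbs free energy of ~237.1 kJ/mol.
AVOGADRO = 6.022e23           # molecules per mole
EV_IN_JOULES = 1.602e-19      # joules per electron-volt
DELTA_G = 237.1e3             # J/mol, standard conditions

joules_per_molecule = DELTA_G / AVOGADRO
ev_per_molecule = joules_per_molecule / EV_IN_JOULES

print(f"{joules_per_molecule:.2e} J (~{ev_per_molecule:.2f} eV) per molecule")
```

A few electron-volts per molecule is tiny in absolute terms, which is why a self-contained power source (battery, sunlight, radioisotope) is plausible for a small device.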


#34 Karomesis

  • Guest
  • 1,010 posts
  • 0
  • Location:Massachusetts, USA

Posted 20 April 2005 - 09:29 PM

don't get me wrong, I'm no nationalist.

I don't care about this or that alliance. Aside from the fact that I don't speak Chinese well and frankly don't want to leave this area right now, that leaves me with the option to actually give a rat's ass about the USA and its technological progress. I'll soon be a multinational anyway, but that doesn't mean I want to move to China or Korea or wherever; their females' countenance is burdensome to my eye, and I find their mannerisms oppressive for my tastes.

#35 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 20 April 2005 - 10:18 PM

I don't see who is 'giving' the technology in your second sentence, but if China discovered it, don't they deserve it as much as anyone? Why is specifically the 'damn chinese government' the evil empire now?


http://www.imminst.o...031

So in response, yes, let's consider them an evil empire because we don't really know, and thus go with the safe method of developing a truly friendly intelligence that we do know will safely handle MNT


Edit: although you don't always have to assume they are an evil empire. I feel like they would probably not destroy the world if they got nanotech, but the idea is that it is certainly very possible, and that scares me enough that in this situation it would be far more logical to stick with the assumption that they are an evil empire. Basically I'm saying let's make our assumptions based on logic, not on incomplete information about their morality.

#36 armrha

  • Guest
  • 187 posts
  • 0

Posted 21 April 2005 - 03:10 PM

Edit: although you don't always have to assume they are an evil empire. I feel like they would probably not destroy the world if they got nanotech, but the idea is that it is certainly very possible, and that scares me enough that in this situation it would be far more logical to stick with the assumption that they are an evil empire. Basically I'm saying let's make our assumptions based on logic, not on incomplete information about their morality.


The most logical thing to do is not to assume the worst possible outcome or the best possible outcome of the situation, but the most likely outcome. We shouldn't take preventive action to limit ourselves, though we should certainly have contingency plans to deal with problems that could arise. It's also a bad idea to just drop all research and focus on a truly friendly intelligence; it's more practical to have our geniuses spread around the spectrum of research and have some successes and some failures, rather than put all of our eggs in one basket and potentially have one big failure. If we all spent the next 40 years working on a friendly artificial intelligence and didn't accomplish it, we'd feel a little silly for not having nanotech developed to rejuvenate our failing minds and bodies, wouldn't we?

#37 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 21 April 2005 - 04:42 PM

The most logical thing to do is not to assume the worst possible outcome or the best possible outcome of the situation but the most likely outcome


Exactly. We have no clue whether they will fuck up or not. That is, we have no substantial indication of what actions they will take with this technology. The indications we do have mean nothing because we don't personally know the people that would have the power, etc.

If we all spent the next 40 years working on a friendly artificial intelligence and didn't accomplish it, we'd feel a little silly for not having nanotech developed to rejuvenate our failing minds and bodies, wouldn't we?


The basic difference in our arguments is that you feel there is a possibility that creating FAI is impossible, and I don't.

The more I have learned about it, the more my estimate that creating FAI is impossible approaches 0.
Also, bear in mind that the more you know about how something works, the more accurate your predictions will be.

But technically, I concede that my knowledge is in no way absolute, and thus you do have a point.

#38 armrha

  • Guest
  • 187 posts
  • 0

Posted 22 April 2005 - 02:45 PM

I think it's perfectly possible to build an FAI. I just think it would be a tremendous waste to work only on FAI until we had it. That's a lot of overspecialization for a whole society. Research should be (and is) spread out over tons of different fields, with different bits culminating in surprise places to advance other bits. FAI may need nanobots for its construction, or nanobots might need FAI to be managed properly. With the points I mentioned in the 'red nano' thread (which is almost identical to this one), it would be a foolish waste of resources to focus only on FAI and outlaw all nanotechnology research.

#39 Matt

  • Guest
  • 2,865 posts
  • 152
  • Location:United Kingdom

Posted 25 April 2005 - 04:25 PM

I've seen zero confirmation of this on the big nanotechnology sites.

#40 justinb

  • Guest
  • 726 posts
  • 0
  • Location:California, USA

Posted 29 April 2005 - 09:40 AM

The most logical thing to do is not to assume the worst possible outcome or the best possible outcome of the situation, but the most likely outcome. We shouldn't take preventive action to limit ourselves, though we should certainly have contingency plans to deal with problems that could arise. It's also a bad idea to just drop all research and focus on a truly friendly intelligence; it's more practical to have our geniuses spread around the spectrum of research and have some successes and some failures, rather than put all of our eggs in one basket and potentially have one big failure. If we all spent the next 40 years working on a friendly artificial intelligence and didn't accomplish it, we'd feel a little silly for not having nanotech developed to rejuvenate our failing minds and bodies, wouldn't we?


The question is not about able minds, but rather about funding. If there is enough funding to do all of what we want to do, then great; there will always be great minds willing to do it. The research going on today is as diverse as it is extensive.

#41 asian_american

  • Guest
  • 12 posts
  • 0

Posted 02 June 2005 - 02:53 PM

My sentiments exactly. Rogue nations possessing this technology is insane, and they need to be dealt with swiftly and brutally.

Say goodbye to altruism for now, say hello to Machiavellianism [:o] "if it be of choice, it is better to be feared than loved"


Since the U.S.A. is currently the most war-starting rogue nation, it then follows that it should give up any attempt to further nanotechnology.

#42 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 03 June 2005 - 05:29 AM

I don't believe humanity is capable of properly using nanotechnology.

I've seen waaay too much confirmation of human stupidity, and a lot of general 'settling for less than optimal' for no logical reason in people, which inevitably would lead to destruction or oppression with the power of MNT.

What I'm saying is that, yes, nanotechnology is a goal, however there are goals above nanotechnology, like security, privacy, and safety, that we are taking too great a risk on by attempting MNT in our current state.

Now, that whole argument basically comes from reading a lot of material from the Singularity Institute. Their argument is that FAI can satisfy not only our goals of security, safety, privacy, freedom, etc., but also, as a byproduct, nanotechnology will become much easier to create.

From that standpoint, wasting time supporting pre-singularity nanotechnology research seems silly.

#43 justinb

  • Guest
  • 726 posts
  • 0
  • Location:California, USA

Posted 03 June 2005 - 03:34 PM

From that standpoint, wasting time supporting pre-singularity nanotechnology research seems silly.


Not if supporting nanotech right now will help expedite the occurrence of the singularity.

#44 psudoname

  • Guest
  • 116 posts
  • 0

Posted 08 June 2005 - 11:23 PM

I don't believe humanity is capable of properly using nanotechnology.

I've seen waaay too much confirmation of human stupidity, and a lot of general 'settling for less than optimal' for no logical reason in people, which inevitably would lead to destruction or oppression with the power of MNT.

What I'm saying is that, yes, nanotechnology is a goal, however there are goals above nanotechnology, like security, privacy, and safety, that we are taking too great a risk on by attempting MNT in our current state.


Yes, but once we have nanotech we will be able to achieve posthumanity quickly, and so it won't be in the hands of human stupidity for very long.
I agree that grey goo/nanowar is a great risk though.


Now, that whole argument basically comes from reading a lot of material from the Singularity Institute. Their argument is that FAI can satisfy not only our goals of security, safety, privacy, freedom, etc., but also, as a byproduct, nanotechnology will become much easier to create.


Why would an AI try to satisfy our goals? Humans could easily be seen either as potential threats to be defeated or as inconsequential ants to be squashed.
Altruism and empathy are human qualities that evolved because cooperation gives a greater chance of survival. While a posthuman may retain some or all of these qualities (I think I would still want to socialise as a posthuman), I see little reason why an AI would have any qualms about killing us.


From that standpoint, wasting time supporting pre-singularity nanotechnology research seems silly.


Nanotech, AI, brain augmentation, etc. would all trigger the singularity. Nanotech research is only silly if one of the others will definitely come first, which is impossible to tell right now.

-psudoname

#45 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 09 June 2005 - 03:14 AM

Why would an AI try to satisfy our goals?

(blah blah blah etc)

--

now that whole argument is basically taken in stride with reading a lot of material from the Singularity Institute

try it, it's good fun.

#46 psudoname

  • Guest
  • 116 posts
  • 0

Posted 09 June 2005 - 09:39 AM

try it, it's good fun.


I have been to the Singularity Institute site; it was the first site that made me realise there is a transhumanist community on the net. And it's a good site, but they are too optimistic about AI.

#47 stormheller

  • Guest
  • 100 posts
  • 1

Posted 19 August 2005 - 03:43 AM

YEAH!! Chinese pride!

#48 spiritus

  • Guest
  • 71 posts
  • 0

Posted 24 November 2005 - 04:35 AM

The first person to create the nanobot will artificially inflate their country's industries while creating a 'wtfhuge' underground man-of-war nanotech factory and take over the world.

Basically, when the first government gets its hands on it, that president will be posted all over your city with 'conform or die' labelled underneath, as they take over our world.

I'm just hoping it's Canada!

#49 athanatos

  • Guest
  • 46 posts
  • 0

Posted 24 November 2005 - 06:38 PM

Think we could make weapons with nanobot blades that pull atoms apart from each other? I mean, if you had a sword lined with like 1,000,000,000 nanobots, all connected to one on/off switch, you could turn them on and have them set to pull apart and throw away every atom they touch. Wouldn't it be able to cut through anything then?

Edited by athanatos, 24 November 2005 - 06:58 PM.


#50 spiritus

  • Guest
  • 71 posts
  • 0

Posted 27 November 2005 - 06:49 AM

To those who say AI will simply destroy us: a fully complex, human-based AI will supposedly have all of our capabilities.

One thing that stops them is a beautiful thing called electromagnetic energy. We have nukes, and plenty of them. They no doubt would not care once in isolation away from Earth; the question is why we would be stupid enough to allow them to get there.

And anything that realises the pure power of biological processes will not compete against us. They are made of silicon chips; we are made of constantly evolving, changing material based on DNA. No matter how fast they try to evolve, they do not have the natural ability within themselves to evolve.

Us > AI in many aspects. And by the time AI would be in a state to claim rights similar to ours, we will have the ability to suit up cyborg/full-computer style.

#51 th3hegem0n

  • Guest
  • 379 posts
  • 4

Posted 28 November 2005 - 02:37 AM

Artificial Intelligence has multiple features that make it a more viable option for Singularity take-off than Intelligence Augmentation.

First of all, the technology relevant to IA is a combination of neuroscience, medical technology, computer science, and bio/nanotech. The technology relevant to AI is only computer software and hardware. Decades of bureaucracy, research grants, investment, big corporations, and technology/business development are required just as a *prerequisite* to the development of IA solutions. The only roadblock to an AGI solution is research.

Second of all, the Singularity is not an evolutionary process. The Singularity is an exponential growth curve. The curve represents the increase of intelligence over time. This occurs because any intelligence with directly modifiable source code for its own intelligence process is capable of literally improving its own design, thereby increasing its ability to FURTHER improve and modify its design.
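The growth curve being described can be sketched as a toy model: an agent whose per-step improvement is proportional to its current capability grows geometrically. The starting value, gain, and step count below are arbitrary illustrative assumptions, not predictions:

```python
# Toy model of recursive self-improvement: capability compounds because a
# more capable agent makes proportionally larger improvements to itself.
def self_improvement_curve(start=1.0, gain=0.10, steps=50):
    """Return capability after each step of c -> c + gain * c."""
    capability = start
    history = [capability]
    for _ in range(steps):
        capability += gain * capability  # improvement scales with capability
        history.append(capability)
    return history

curve = self_improvement_curve()
# Growth is geometric: curve[n] == start * (1 + gain) ** n
```

The point of the sketch is only the shape of the curve: as long as each improvement feeds back into the improver, the trajectory is exponential rather than linear.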

Third of all, IA is necessarily a very slow, many-decade process. AI need not take even a year to reach a full-blown Singularity. AI has the capacity to function in a virtual environment, to operate directly on its own source code (which it can literally modify on the spot, as opposed to IA, where any marginal improvement would take years), and to literally increase its subjective time frame so as to feel a minute for every second we feel, or faster (depending on hardware and algorithmic limitations).

Finally, nuclear or other macroscale military technology will be utterly useless against an AGI of even a human-equivalent level of intelligence. Given access to the internet, any AGI of even moderate intelligence capacity could VERY quickly turn words and program code into money, and quickly turn money into a research lab, and quickly turn a research lab into an undefeatable nanotech solution to whatever optimization process the designer left it with.

The point being that AGI is a much faster and more effective path to the Singularity than IA, while, tangentially, the first entity to initiate the Singularity has pretty much perfect control over the rest of the world, whether they are good or evil, artificial or human.



