LongeCity - Advocacy & Research for Unlimited Lifespans

Moore's Law continues to hold



#1 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 January 2007 - 02:18 PM


http://www.washingto...7012700018.html

Moore's Law seen extended in chip breakthrough

By Scott Hillis
Reuters
Saturday, January 27, 2007; 1:00 AM

SAN FRANCISCO (Reuters) - Intel Corp. and IBM have announced one of the biggest advances in transistors in four decades, overcoming a frustrating obstacle by ensuring microchips can get even smaller and more powerful.


"Moore's Law of Mad Scientists: The minimum IQ required to destroy the world drops by one point every 18 months." - Eliezer Yudkowsky

#2 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 28 January 2007 - 12:12 AM

Memory is getting smaller and faster too. I strongly disagree with the last comment in this article. Back to work!

New molecular chip packs data more tightly than ever


UCLA and Caltech researchers have created the densest computer memory chip ever, a device that can comfortably hold the Declaration of Independence yet is only the size of a white blood cell.

The experimental device is nowhere near commercialization, but it demonstrates the potential of molecular manufacturing techniques, which promise to overcome the size limitations of silicon circuitry. Industry experts predict that silicon circuit density will reach the theoretical maximum in 2013, if the pace of increase holds.

The new device has about 100 billion bits of information per square centimeter — about 20 times the density of current silicon memories. Caltech chemist James Heath, who designed the circuit, said its density could probably be increased 10-fold.

The results were reported Thursday in the journal Nature.

The keys to the device are extremely fine wires fabricated in Heath's laboratory and molecular switches synthesized by UCLA chemist J. Fraser Stoddart.

The switch is a dumbbell-shaped molecule; a ring-shaped chemical around the central rod can be moved from end to end by applying a small electric voltage.

The memory chip itself is a grid of 400 parallel nanowires crosshatched by another 400. At each of the 160,000 intersections, about 100 of the nanoswitches are deposited. Each group of switches serves as a bit that can be switched off or on by the application of a current.

But the researchers still have a long way to go. Only about 30% of the junctions actually work — although it is possible to program around the defective ones — and they operate fairly slowly. Moreover, the team has not yet been able to make leads small enough to attach to each of the 800 wires.

Nonetheless, Heath said, all of these problems are potentially solvable.

"Whether it is actually possible to get this new memory circuit into a laptop, I don't know," he said.

"But we have time."




#3 advancedatheist

  • Guest
  • 1,419 posts
  • 11
  • Location:Mayer, Arizona

Posted 28 January 2007 - 04:46 AM

It doesn't matter how fast computers get. We don't have the algorithms to run on them to make them turn into Eliezer's fantasies. Skeptic magazine a few months back published an article providing abundant evidence of AI's poor prospects despite decades of "research." A later issue published a letter by a computer scientist who had independently arrived at a similar conclusion about AI and decided not to pursue a Ph.D. in it because he wanted something to show for his life.

The whole AI idea has something fundamentally wrong with it, considering that the field started when I Love Lucy still ran as a current TV series and it has yet to produce something smart enough to disturb people.

EDIT:

AI's failure looks especially impressive considering that governments and corporations have spent billions of dollars on it and hired generations of very smart people to study it.

Edited by advancedatheist, 28 January 2007 - 05:04 AM.


#4 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 28 January 2007 - 05:33 AM

I think I heard recently that one (or more) of the big car makers has
given up on trying to get a car to drive itself, this after spending billions.
And a self-driving car isn't even close to full AI in complexity.

-Stephen

#5 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 28 January 2007 - 06:06 AM

Intel's site has some interesting stuff. Some excerpts are below. -Stephen

-----------------------------------------------------------------------------------------

The fundamental building blocks for all computer chips—transistors—have tracked with Moore's Law for forty years. Intel has led the industry in transistor gate dielectric scaling using silicon dioxide (SiO2) for seven logic-process generations over the last 15 years. But as transistors shrink, leakage current can increase. Managing that leakage is crucial for reliable high-speed operation, and is becoming an increasingly important factor in chip design. Intel has made a significant breakthrough in solving the chip power problem, identifying a new material, called "high-k" to replace the transistor's silicon dioxide gate dielectric, and new metals to replace the polysilicon gate electrode of NMOS and PMOS transistors.

"High-k" stands for high dielectric constant, a measure of how much charge a material can hold. Different materials similarly have different abilities to hold charge. Imagine a sponge, which can hold a great deal of water; wood, which can hold less; and glass, which can hold none at all. Air is the reference point for this constant and has a "k" of one. "High-k" materials, such as hafnium dioxide (HfO2), zirconium dioxide (ZrO2) and titanium dioxide (TiO2) inherently have a dielectric constant or "k" above 3.9, the "k" of silicon dioxide.


http://www.intel.com...icon/high-k.htm

-------------------------------------------------------------------------------------------
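
A rough way to see what a higher k buys: gate capacitance per unit area goes as k times the vacuum permittivity divided by film thickness, so a high-k film can be made much thicker (which cuts tunnelling leakage) while still matching or beating a very thin SiO2 layer. A sketch in Python, with illustrative thicknesses and a guessed k value rather than Intel's actual process numbers:

# Illustrative only: the thicknesses and the high-k value are assumptions, not Intel's.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance_per_area(k, thickness_m):
    """Parallel-plate gate capacitance per unit area: C/A = k * eps0 / t."""
    return k * EPS0 / thickness_m

c_sio2 = capacitance_per_area(3.9, 1.2e-9)    # very thin SiO2 gate oxide
c_highk = capacitance_per_area(25.0, 3.0e-9)  # thicker hafnium-based high-k film (k assumed)

print(f"SiO2   C/A = {c_sio2:.2e} F/m^2")
print(f"high-k C/A = {c_highk:.2e} F/m^2")
# The high-k film is 2.5x thicker, so tunnelling leakage drops sharply, yet its
# capacitance per area is still higher than the thin SiO2 layer's.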

Graph of Moore's Law over the last several years:

http://www.intel.com...logy/mooreslaw/

-------------------------------------------------------------------------------------------

Future Intel silicon process technologies
Intel's 65nm process technology extends our 15-year record of ramping production on a new process generation every two years and demonstrates the ability to continue delivering the benefits of Moore's Law. Intel is also well along in developing our next two process generations, 45nm and 32nm, due in 2007 and 2009, respectively. In order to maintain this cycle in the future, we continue to drive silicon research and development and make investments in fab capacity.

In June 2006, Intel researchers announced the development of improved CMOS tri-gate (3-D) transistors. In 2005, researchers from Intel and QinetiQ jointly developed prototype transistors with Indium Antimonide (InSb is a III-V compound semiconductor), which show promise for future high-speed and yet very-low-power logic applications. These transistors could be used in Intel's logic products in the second half of the next decade and could be a factor in the continuation of Moore's Law well beyond 2015.

http://www.intel.com..._technology.htm
--------------------------------------------------------------------------------------------
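
The 65nm-to-45nm-to-32nm cadence described above is essentially a shrink of about 0.7x in linear feature size every two years; a small sketch (treating the factor as exactly 0.7, which is only an approximation) reproduces the sequence:

# Approximate node-size projection: ~0.7x linear shrink per two-year generation.
def projected_node_nm(start_nm, start_year, year):
    generations = (year - start_year) // 2
    return start_nm * (0.7 ** generations)

for year in (2005, 2007, 2009, 2011):
    print(year, round(projected_node_nm(65.0, 2005, year), 1), "nm")
# 2005 65.0, 2007 45.5, 2009 31.9, 2011 22.3 -- close to the 65/45/32/22nm progression.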

Tri-gate transistors are likely to play a critical role in Intel's future energy-efficient performance capabilities because they offer considerably better performance per watt than today's planar transistors. Compared to today's 65nm transistors, integrated tri-gate transistors can offer a 45 percent increase in drive current (switching speed) or 50 times reduction in off current, and a 35 percent reduction in transistor switching power.

http://www.intel.com...emonstrated.htm




#6 basho

  • Guest
  • 774 posts
  • 1
  • Location:oʎʞoʇ

Posted 28 January 2007 - 07:13 AM

This is really exciting news, and even better is that we're going to see consumer CPUs built using this technology by the end of 2007. The man himself says:

"The implementation of high-k and metal gate materials marks the biggest change in transistor technology since the introduction of polysilicongate MOS transistors in the late 1960s" - Gordon Moore

Some interesting facts from Intel:

45nm Size Comparison
o A human hair = 90,000nm
o Ragweed pollen = 20,000nm
o Bacteria = 2,000nm
o Intel 45nm transistor = 45nm
o Rhinovirus = 20nm
o Silicon atom = 0.24nm

The price of a transistor in one of Intel’s forthcoming next-generation processors -- codenamed Penryn -- will be about 1 millionth the average price of a transistor in 1968. If car prices had fallen at the same rate, a new car today would cost about 1 cent.

You could fit more than 2,000 45nm transistors across the width of a human hair.

You could fit more than 30,000 45nm transistors onto the head of a pin, which measures approximately 1.5 million nm.

More than 2,000 45nm transistors could fit on the period (estimated to be approximately 0.1 millimeters or 100,000 nm in diameter) at the end of this sentence.

A 45nm transistor can switch on and off approximately 300 billion times a second. A beam of light travels less than a tenth of an inch during the time it takes a 45nm transistor to switch on and off.
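
Those figures check out with simple arithmetic (the pin and period numbers only make sense as counts across the width, which is evidently how the fact sheet means them); taking the speed of light as roughly 3.0e8 m/s:

# Quick checks of the 45nm facts quoted above.
hair_nm, pin_nm, period_nm, transistor_nm = 90_000, 1_500_000, 100_000, 45
print("across a hair:  ", hair_nm // transistor_nm)     # 2000
print("across a pin:   ", pin_nm // transistor_nm)      # 33333 -> "more than 30,000"
print("across a period:", period_nm // transistor_nm)   # 2222  -> "more than 2,000"

switches_per_second = 300e9          # "approximately 300 billion times a second"
light_m_per_s = 3.0e8
inches_per_switch = light_m_per_s / switches_per_second / 0.0254
print("light travel per switch:", round(inches_per_switch, 3), "inch")  # ~0.039, under a tenth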



#7 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 28 January 2007 - 12:07 PM

AI's failure looks especially impressive considering that governments and corporations have spent billions of dollars on it and hired generations of very smart people to study it.


I share your pessimism regarding AI to some degree; however, I think there's going to be plenty of opportunity for using computing technology to advance life extension over the next few decades. In fact, other than the obvious knowledge mountain we still need to climb in understanding the genome and proteome, I see it as the primary means of modeling, storing and simulating that complexity so an individual researcher's brain doesn't have to memorize hundreds of thousands of biochemical reactions, transcription factors and protein interactions.

AI in the form of genetic algorithms is already widely used in genome searching, protein folding and computer-aided molecular design, and there's a plethora of opportunity to apply both linear and evolutionary algorithms to the larger -omic data sets (individually and in combination) as they are published. Where complexity overwhelms us, we'll still need to fall back on brute-force computing, which is why computing power still needs to grow. Whether we need to continue squeezing more horsepower out of every micron is debatable, as parallel algorithms running on distributed clusters are becoming an economic reality for even small groups wishing to venture into the informatics space.

I suspect we'll have collected much of the -omic data within the next two decades. We'll spend the next three sorting it, searching it, organizing it and running simulations against it in myriad fashions.
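
For anyone who hasn't seen one, the genetic algorithms mentioned above are conceptually tiny. A toy sketch in Python; the target-string "fitness" is just a stand-in for a real objective such as a docking score, and a serious GA would also use crossover:

import random

TARGET = "GATTACA"
ALPHABET = "ACGT"

def fitness(candidate):
    """Count of positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Flip each base to a random one with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)       # evaluate and rank
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]                        # selection: keep the fittest 10
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print("generation", generation, "best:", max(population, key=fitness))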

#8 Athanasios

  • Guest
  • 2,616 posts
  • 163
  • Location:Texas

Posted 28 January 2007 - 03:37 PM

It doesn't matter how fast computers get. We don't have the algorithms to run on them to make them turn into Eliezer's fantasies. ...abundant evidence of AI's poor prospects despite decades of "research."

The whole AI idea has something fundamentally wrong with it, considering that the field started when I Love Lucy still ran as a current TV series and it has yet to produce something smart enough to disturb people.

EDIT:

AI's failure looks especially impressive considering that governments and corporations have spent billions of dollars on it and hired generations of very smart people to study it.

Eliezer would say

Rather than thinking in terms of the "minimum" hardware "required" for Artificial Intelligence, think of a minimum level of researcher understanding that decreases as a function of hardware improvements. The better the computing hardware, the less understanding you need to build an AI. The extremal case is natural selection, which used a ridiculous amount of brute computational force to create human intelligence using no understanding, only nonchance retention of chance mutations.

He would also say, about the comments on decades of research:

confusing the speed of AI research with the speed of a real AI once built is like confusing the speed of physics research with the speed of nuclear reactions. It mixes up the map with the territory. It took years to get that first pile built, by a small group of physicists who didn't generate much in the way of press releases. But, once the pile was built, interesting things happened on the timescale of nuclear interactions, not the timescale of human discourse. In the nuclear domain, elementary interactions happen much faster than human neurons fire. Much the same may be said of transistors.

and

There are also other reasons why an AI might show a sudden huge leap in intelligence. The species Homo sapiens showed a sharp jump in the effectiveness of intelligence, as the result of natural selection exerting a more-or-less steady optimization pressure on hominids for millions of years, gradually expanding the brain and prefrontal cortex, tweaking the software architecture. A few tens of thousands of years ago, hominid intelligence crossed some key threshold and made a huge leap in real-world effectiveness; we went from caves to skyscrapers in the blink of an evolutionary eye. This happened with a continuous underlying selection pressure - there wasn't a huge jump in the optimization power of evolution when humans came along. The underlying brain architecture was also continuous - our cranial capacity didn't suddenly increase by two orders of magnitude. So it might be that, even if the AI is being elaborated from outside by human programmers, the curve for effective intelligence will jump sharply.

and

AI may make an apparently sharp jump in intelligence purely as the result of anthropomorphism, the human tendency to think of "village idiot" and "Einstein" as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds-in-general. Everything dumber than a dumb human may appear to us as simply "dumb". One imagines the "AI arrow" creeping steadily up the scale of intelligence, moving past mice and chimpanzees, with AIs still remaining "dumb" because AIs can't speak fluent language or write science papers, and then the AI arrow crosses the tiny gap from infra-idiot to ultra-Einstein in the course of one month or some similarly short period. I don't think this exact scenario is plausible, mostly because I don't expect the curve of recursive self-improvement to move at a linear creep. But I am not the first to point out that "AI" is a moving target. As soon as a milestone is actually achieved, it ceases to be "AI". This can only encourage procrastination.



#9 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 28 January 2007 - 07:03 PM

Around 1968 I read about the concept of creating a driverless car.
Now it's 39 years later and they still can't build one.

If I remember correctly, around 1979-80 I read or heard about androids
that would be walking around and doing jobs by the year 2000.
================================================
From a previous post:

But I am
not the first to point out that "AI" is a moving target. As soon as a milestone is actually
achieved, it ceases to be "AI".

================================================

Maybe I don't understand what is being said here. Either a car can drive
itself or it can't, and it can't.
(By the way I'm still waiting for my flying car.)

-----------------------------------------------------------------------------------------

The driverless car is an emerging family of technologies, ultimately aimed at a full "taxi-like" experience for car users, but without a driver. Together with alternative propulsion, it is seen as the main technological advance in car technology by 2020. These projects are also referred to as an autopilot, autonomous vehicle, auto-drive car, or automated guided vehicle (AGV).

http://en.wikipedia..../Driverless_car

--------------------------------------------------------------------------------------

Sure, decade after next. Whatever the decade we're in.

-------------------------------------------------------------------------------------

Though the vision of a fully autonomous vehicle is clear, it would be such an upheaval in technology and lifestyle that few dare contemplate a 'Big Bang' new technology that would simply do it. From a scientific/engineering point of view, this looks like a case of an AI-complete problem, meaning that it is so complex that it can only be solved completely by a program that has human-level intelligence.

The social challenge is in getting people to trust the car, getting legislators to permit the car onto the public roads, and untangling the legal issues of liability for any mishaps with no person in charge.

http://en.wikipedia..../Driverless_car

------------------------------------------------------------------------------------------

After reading what I just found...

"...getting legislators to permit the car onto the public roads, and untangling the legal issues of liability for any mishaps with no person in charge."

I just don't see it in our lifetimes.

Has it ever been determined how much processing power is needed
to run an android that can understand a little English (and therefore maybe
hold down a job)?

I wouldn't call "Make me a hamburger." or any of the
100 or so other common phrases heard in a restaurant a moving AI
target. It's not like the thing has to know all of the English language.
It's not like the thing has to adapt. There are certain fixed steps to making
a burger. I don't know about the AI field, so I am somewhat curious as to
how many gates (or transistors) they say they need. If they don't know
this, they are totally in the dark.

From a previous post:

"The better the computing hardware, the less
understanding you need to build an AI."


-Stephen

#10 jc1991

  • Guest
  • 61 posts
  • 0

Posted 28 January 2007 - 09:01 PM

The current lack of flying cars isn't a result of development problems; it's a result of marketing problems. Not enough people want a flying car badly enough to make it practical to sell them at their current, prohibitively expensive cost, so the cost of production isn't falling quickly through mass production. This means that the only way to reduce the cost of production is to wait for parts to get less expensive, which takes much more time.


To address your last example, there are two types of automated food service systems. The first would involve a touch screen or keyboard; the second would involve a speech recognition engine.

The first is fairly easy to build, and has in fact been done before. As with my first example, the problem is one of marketing; very few people were willing to use the first type of system, and very few companies are willing to implement it because it limits the customer's choice in food preparation and doesn't benefit the company enough to offset the customer annoyance. It's a matter of cost versus benefit, and this type of system doesn't cut it.

The second is actually deceptively difficult to get working, for two reasons: there are many ways of saying "make me a hamburger", and many different possible accents and tones. This means that the system does in fact have to understand most of the English language to function correctly. The system also has to be able to adapt to unusual food preparation requests in the same way a person can. The actual process of making the hamburger is easy, once the system knows that the customer wants a hamburger and how they want it prepared. The person or persons programming the system can't anticipate every possible combination of these two factors, so they have to give the system the ability to understand what is being asked of it and adapt to the situation. If it can't, it locks up or prepares the wrong thing, both of which are bad.


The problem of a fully automated car has two solutions. The first (and easiest) is to replace current roadways with smart systems that help to guide the car. This requires more processing power and a larger infrastructure, but much less programming. The second is a car that understands the rules of the road and can guide itself without external help (using just its own sensory information to determine where the road is, where other cars are, and where numerous other obstacles are). This requires less overall processing power and a smaller infrastructure, but much more programming.
The second solution is where everyone is focusing their work, because the first solution is unlikely to happen any time soon. Unless there is a massive reworking of governmental structure, no one is going to be given the authority to tear apart the roadway system and replace it with something else over a period of many years.

Edited by jc1991, 28 January 2007 - 09:19 PM.


#11 basho

  • Guest
  • 774 posts
  • 1
  • Location:oʎʞoʇ

Posted 28 January 2007 - 09:22 PM

Around 1968 I read about the concept of creating a driverless car.
  Now it's 39 years later and they still can't.
...
  Maybe I don't understand what is being said here. Either a car can drive
  itself or it can't, and it can't.

It can. They're getting better every year, especially with the push given by the DARPA Grand Challenge: "The 2005 competitors were much more successful than those of 2004; only one failed to pass the 11.84 km (7.36 mile) mark set by the best-performing 2004 entry, Sandstorm. By the end, 18 robots had been disabled and five robots finished the course"

The 2007 Urban Challenge looks cool, very Mad Max.

And autonomous, unpiloted aircraft are coming along nicely. Even commercial passenger jets could be fully automated. Check this out:
Airliner flown 'without pilot' in UAV test

A jet airliner was flown over south-west England recently with no pilot in the cockpit, to test technology that might one day be used to control swarms of unpiloted aircraft from a single fighter jet.

Under civil aviation law, the pilot controlling the jetliner still had to be on board the aircraft. But he sat at the back of the plane using only the UAVCCI to control the large jet, along with four computer-simulated UAVs on a virtual attack mission.
The UAVCCI uses software agents to control each aircraft under its command, minimising the pilot's workload. This makes each of the UAVs semi-autonomous: they fly straight and level on their own and can be given simple orders using a point-and-click interface on what Williams calls "a simple, flat, moving map".
"The pilot only had to give top level instructions to the UAVs on where to go and what weapons to use, not fly them minute-by-minute,"



After reading what I just found...

"...getting legislators to permit the car onto the public roads, and untangling the legal issues of liability for any mishaps with no person in charge."

  I just don't see it in our lifetimes.

I don't know about that. Given the extremely high number of fatalities from human-controlled cars and trucks, I can see autonomous control technology being compulsory in the future (well, maybe in less litigious countries, that is).

#12 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 28 January 2007 - 09:54 PM

jc1991 wrote> The current lack of flying cars isn't a result of development problems; it's a result of marketing problems. Not enough people want a flying car badly enough to make it practical to sell them at their current, prohibitively expensive cost, so the cost of production isn't falling quickly through mass production. This means that the only way to reduce the cost of production is to wait for parts to get less expensive, which takes much more time.

Stephen wrote> The flying car stuff was a joke. I have thought about this just
a little though. The thing about a flying car is that it would be
an aircraft, flying much like a helicopter. The general population
would be pilots, not drivers. Maybe in 100 years, if ever.

jc1991 wrote>There are many ways of saying "make me a hamburger" and many different possible accents and tones. This means that the system does in fact have to understand most of the English language to function correctly. The system also has to be able to adapt to unusual food preparation requests in the same way a person can. The actual process of making the hamburger is easy, once the system knows that the customer wants a hamburger and how they want it prepared.

Stephen wrote> At the *big* fast food chains every order is typed into a computer
first. The burger maker in the kitchen sees the order on another
screen and makes it.

An android wouldn't have to know verbal English in this simplified
(very simplified) setup. There are only 100 to 200 ways to make
a burger.

Words to know:

"light" "heavy" "plain" "add" "no" "extra"

"cheese" "lettuce" "pickles" "onions" "tomatoes" "mayo" "ketchup" "mustard"

"double" "well done" and maybe a few others

In reality, those who work in fast food have to do a number of things,
including cleaning. That's another big deal for an android. You also have minors.
In this case minors near machinery (a robot). Just getting a robot/android
to sweep and mop the floors (overnight) would be something. I don't *think*
they are even close to that. But like I said I'm not in this field.

-Stephen

#13 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 28 January 2007 - 10:10 PM

basho wrote>It can. They're getting better every year, especially with the push given by the DARPA Grand Challenge: "The 2005 competitors were much more successful than those of 2004; only one failed to pass the 11.84 km (7.36 mile) mark set by the best-performing 2004 entry, Sandstorm. By the end, 18 robots had been disabled and five robots finished the course"

Stephen wrote> Ok, this is cars using GPS in the deserts, right? This is easy.

It ain't easy for an experienced driver to go into a new city,
like Worcester, MA, with its major highways, one-way streets,
people darting in front of you right after the light changed
to green, etc.


-Stephen

#14 advancedatheist

  • Guest
  • 1,419 posts
  • 11
  • Location:Mayer, Arizona

Posted 28 January 2007 - 10:34 PM

I notice that Eliezer qualifies his statements with a number of subjunctives (may, might), along with analogies to nuclear chain reactions and speculations about how humans became smart, neither of which necessarily explains how Colossus could wake up and take over the world. His pronouncements don't shed any more light on how this could happen than stuff a science fiction writer could have written decades ago.

#15 jc1991

  • Guest
  • 61 posts
  • 0

Posted 28 January 2007 - 10:45 PM

Stephenszpak wrote>At the *big* fast food chains every order is typed into a computer
first. The burger maker in the kitchen sees the order on another
screen and makes it.

An android wouldn't have to know verbal English in this simplified
(very simplified) setup. There are only 100 to 200 ways to make
a burger.

jc1991 wrote> It isn't just about how many possible combinations there are though. It's about the robot's ability to equate hamburger with burger with cheeseburger with hamburger pronounced with a French accent. If the robot can't do even one of those things, it can't do its job correctly.

Here's an example: I have a friend who sometimes eats at McDonald's. She almost always orders a certain combo (I can't remember the specific number) that has two hamburger patties, but she always makes sure that they only put one patty on her hamburger. There are several hundred different ways of asking for this alone, all of which must be anticipated and programmed into an AI if it is to correctly prepare a hamburger without understanding English.
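
Even a handful of those "several hundred ways" shows the problem with a fixed keyword list; the phrases and the tiny keyword set below are invented for illustration:

requests = [
    "just one patty please",
    "can you make it a single",
    "hold the second patty",
    "only half the meat",
    "single patty, not double",
]
keywords = {"one", "single"}   # a naive "one patty" detector

for request in requests:
    tokens = set(request.lower().replace(",", "").split())
    matched = sorted(keywords & tokens)
    print(f"{request!r:35} -> {matched if matched else 'missed'}")
# "hold the second patty" and "only half the meat" slip straight past the keyword
# list, which is the point: without some real language understanding, every
# phrasing has to be anticipated by hand.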

#16 basho

  • Guest
  • 774 posts
  • 1
  • Location:oʎʞoʇ

Posted 28 January 2007 - 10:48 PM

Stephen wrote> Ok, this is cars using GPS in the deserts, right? This is easy.

GPS can tell you roughly where you are: your coordinates, give or take a few metres. It does not give you the condition of the road surface, any upcoming obstacles, etc. These vehicles make use of a variety of sensors and algorithms in order to navigate the course. It's not a simple matter of an unobstructed paved road. It's far more complex than that.
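
A toy illustration of that division of labour, with made-up noise figures and a pretend range sensor rather than anything from a real Grand Challenge vehicle:

import math
import random

def gps_fix(true_x, true_y, sigma_m=3.0):
    """A GPS-style position estimate: the truth plus a few metres of noise."""
    return true_x + random.gauss(0, sigma_m), true_y + random.gauss(0, sigma_m)

def clear_ahead(range_returns_m, safety_margin_m=10.0):
    """Pretend range sensor check: is the nearest return outside the margin?"""
    return min(range_returns_m, default=math.inf) > safety_margin_m

x, y = gps_fix(100.0, 250.0)
print(f"estimated position: ({x:.1f}, {y:.1f})  -- only good to a few metres")

if clear_ahead([42.0, 18.5, 64.0]):
    print("nothing inside the safety margin -> keep heading for the waypoint")
else:
    print("obstacle inside the safety margin -> brake and replan")

# GPS answers "roughly where am I?"; the local sensors answer "what is actually
# in front of me right now?" -- the vehicle needs both.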

#17 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 28 January 2007 - 10:48 PM

http://www.imdb.com/...t0064177/quotes

#18 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 28 January 2007 - 11:17 PM

Stephen wrote> Ok, this is cars using GPS in the deserts, right? This is easy.

GPS can tell you roughly where you are: your coordinates, give or take a few metres. It does not give you the condition of the road surface, any upcoming obstacles, etc. These vehicles make use of a variety of sensors and algorithms in order to navigate the course. It's not a simple matter of an unobstructed paved road. It's far more complex than that.


Yes, but have they ever got a driverless car to go 1/2 mile, stop,
and return to where it started, in a real-world situation? Traffic,
pedestrians, stop signs. The driving abilities a teenager learns in
6 hours of instruction.

Hey just found this. Robot nurses in 3 years. Funny how they can replace a nurse
but can't make a burger.

==============================================================

He said the robots could provide a valuable service guiding people around the hospital. A visitor would state the name of a patient at an information terminal and then follow a robot to the correct bedside.

If the nearest robot was not sure of a patient's location, it could seek help by communicating with others in the right area.

The robots will be fitted with sensors and cameras, allowing them to avoid collisions while travelling through wards and corridors. High-speed lanes could allow them to move from place to place quickly.

The robots would also employ face and voice recognition technology to communicate with patients and spot unauthorised visitors.

"But the human-robot interaction will be tricky, as the robots will have to be able to deal with people with different injuries and disabilities as well as the elderly and seriously ill patients," said Mr Schlegel.


http://news.scotsman...fm?id=110202007



==============================================================


-Stephen

#19 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 28 January 2007 - 11:25 PM

jc1991 wrote> It isn't just about how many possible combinations there are though. It's about the robot's ability to equate hamburger with burger with cheeseburger with hamburger pronounced with a French accent. If the robot can't do even one of those things, it can't do its job correctly.

Here's an example: I have a friend that sometimes eats at McDonalds. She almost always orders a certain combo (I can't remember the specific number) that has two hamburger patties, but she always makes sure that they only put one patty on her hamburger. There are several hundred different ways of asking for this alone, all of which must be anticipated and programmed into an AI if it is to correctly prepare a hamburger without understanding English.

Stephen wrote> What's with the accent? I guess I wasn't clear. I should have
said that the order-taker/cashier was a human.

OK, forget the burgers. What about a gas station attendant
that only accepts cash? Can they even do this?

-Stephen

#20 jc1991

  • Guest
  • 61 posts
  • 0

Posted 28 January 2007 - 11:41 PM

Stephenszpak wrote> Yes, but have they ever got a driverless car to go 1/2 mile, stop,
and return to where it started, in a real-world situation? Traffic,
pedestrians, stop signs. The driving abilities a teenager learns in
6 hours of instruction.

jc1991 wrote> You have to learn to crawl before you can learn to walk. Getting a car to drive itself at all is a major achievement, when the car doesn't have the mind of a teenager.

Stephenszpak wrote> Ok forget the burgers. What about a gas station attendant
that only accepts cash? Can they even do this?

jc1991 wrote> Well, what do you want it to be able to do? If you only want it to accept payment and pump gas, that's easy but would cost more than simply hiring a human attendant. (Since you would be taking a modern gas pump and giving it an arm and the ability to find and open gas tank caps.) You would end up with a complicated camera arm attached to each pump that could scan the car, determine the position of the gas tank cap, open it, and insert the gas pump nozzle. It's possible, but would be an inane experiment because of the cost of building millions of large robotic arms. (The arms would have to be armored to some degree to prevent damage from vandals and accidents. People don't have to worry about having gang symbols spray-painted onto them. People also cost less to replace if they can't work any more.)

The problem with both of your examples is that building an expensive robot to do what a cheap human already does is bad business sense, especially when many humans are perfectly willing to do the work. (Which is part of the reason so much manufacturing work has become automated. Much of the automated work was very dangerous, but also required skill, so it was hard to get people to do it. Making hamburgers is safe and requires little skill, so it attracts people who need the money and can't get it anywhere else. Obviously some of the work that is automated is automated because it's so boring that no one is willing to do it, but it leads to the same result.) The frequency of automation would probably increase if we found it easy to look at long-term benefits, but most of us don't.

Edited by jc1991, 29 January 2007 - 12:34 AM.


#21 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 29 January 2007 - 12:35 AM

jc1991 wrote> The problem with both of your examples is that building an expensive robot to do what a cheap human already does is bad business sense, especially when many humans are perfectly willing to do the work.

Stephen wrote> So what are the costs, if you know? Paying someone to be
a janitor for 8000 hours might be $64,000. Is there any such
thing as a robot/android walking, picking up and setting down
things, etc. anywhere in use at any job?


$8 × 8,000 = $64,000

(I know robots have been used by car makers for 20 or more years,
but I'm thinking about a robot/android that can move about like
a human and do a very basic repetitive task. Operating a
machine that makes something in a factory, for example. Instead
of making a new factory from scratch with robotic arms and
all the rest.)
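
The break-even arithmetic is easy to sketch; the robot price and upkeep below are pure guesses, only there to show the shape of the cost-versus-benefit comparison:

wage_per_hour = 8.0
hours = 8000                          # Stephen's figure, roughly four years of full-time work
human_cost = wage_per_hour * hours
print("human labour:", human_cost)    # 64000.0

robot_price = 250_000.0               # assumed purchase price (made up)
upkeep_per_year = 15_000.0            # assumed annual maintenance (made up)
years = hours / 2000                  # ~2000 working hours per year
robot_cost = robot_price + upkeep_per_year * years
print("robot, assumed:", robot_cost)  # 310000.0 over the same four years
# With these made-up numbers the robot never pays for itself over the period,
# which is the cost-versus-benefit point made above.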

-Stephen

#22 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 29 January 2007 - 12:39 AM

It doesn't matter how fast computers get. We don't have the algorithms to run on them to make them turn into Eliezer's fantasies. Skeptic magazine a few months back published an article providing abundant evidence of AI's poor prospects despite decades of "research." A later issue published a letter by a computer scientist who had independently arrived at a similar conclusion about AI and decided not to pursue a Ph.D. in it because he wanted something to show for his life.

The whole AI idea has something fundamentally wrong with it, considering that the field started when I Love Lucy still ran as a current TV series and it has yet to produce something smart enough to disturb people.

EDIT:

AI's failure looks especially impressive considering that governments and corporations have spent billions of dollars on it and hired generations of very smart people to study it.

hankconn wrote> AI isn't a failure. There have been countless successes (in many different ways) in AI that have very significantly contributed to the progress of humanity.

What is the most advanced example of AI so far?

-Stephen

#23 Aegist

  • Guest Shane
  • 1,416 posts
  • 0
  • Location:Sydney, Australia

Posted 29 January 2007 - 12:47 AM

  hankconn

What is the most advanced example of AI so far?

-Stephen

There was the kid who sees dead people, in that movie "AI" and then there was the robot which saved humanity in that movie... iRobot (was that sponsored by Apple?). They are definitely the most advanced I have seen.







(before anyone gets too caught up in taking me seriously, yes I am joking)

#24 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 29 January 2007 - 01:09 AM

Google results:

One of these five next-generation units currently guides visitors to Honda’s research and domestic sales office in Wako, a suburb of Tokyo. Asimo greets visitors at the second floor reception desk, walks them (actually, it’s more of a strut) to a table in the waiting area, and disappears to retrieve a tray of coffee. Asimo can greet up to 10 visitors a day but appears to require a watchful team of underlings, perhaps to ensure he doesn’t run amok.

Honda spokesperson Yuji Hatano deflects questions about any upcoming robot products from Honda. “You have to think 20 or 30 years in the future. This company in a larger context is about mobility. Cars and motorcycles might vanish one day, like the LP player. Robots could be the next step in the mobility market.” In the meantime, robotics research helps with shorter-term goals, like the development of ultra-compact motors.

Other Japanese companies seem to agree. Though Sony shuttered its consumer robotics product line last year, Toyota recently started a robotics group and demonstrated a trumpet-playing android.


(They've come soooo far. S.S.)

Aug. 2006

http://www.msnbc.msn...ewsweek/page/4/

====================================================

A contest to build a robot that can operate autonomously in urban warfare conditions, moving in and out of buildings to search and destroy targets like a human soldier, was launched in Singapore on Tuesday.

(this should be a joke; they can't even pump gas. S.S.)

January 2007

http://www.newscient...-announced.html

===============================================

-Stephen

#25 Athanasios

  • Guest
  • 2,616 posts
  • 163
  • Location:Texas

Posted 29 January 2007 - 01:14 AM

Stephen, I think you are falling prey to this anthropomorphism.

"AI may make an apparently sharp jump in intelligence purely as the result of
anthropomorphism, the human tendency to think of "village idiot" and "Einstein" as the
extreme ends of the intelligence scale, instead of nearly indistinguishable points on the
scale of minds-in-general. Everything dumber than a dumb human may appear to us as
simply "dumb". One imagines the "AI arrow" creeping steadily up the scale of
intelligence, moving past mice and chimpanzees, with AIs still remaining "dumb"
because AIs can't speak fluent language or write science papers, and then the AI arrow
crosses the tiny gap from infra-idiot to ultra-Einstein in the course of one month or some
similarly short period."

#26 jc1991

  • Guest
  • 61 posts
  • 0

Posted 29 January 2007 - 01:21 AM

AI and robots are two different things, albeit two related things.

Stephenszpak wrote> So what are the costs, if you know? Paying someone to be
a janitor for 8000 hours might be $64,000. Is there any such
thing as a robot/android walking, picking up and setting down
things, etc. anywhere in use at any job?

jc1991 wrote> There are small examples of these things (one of which you posted above). Although I don't have exact numbers, it is easy enough to see that developing new technology for the purpose of opening gas tank caps or making hamburgers costs large amounts of money. It also costs money to maintain the equipment once you have it. This means that it's easier and cheaper in the short run to keep using human workers instead of spending money on developing entirely new technology to do simple things, especially when human workers are making you massive amounts of money.

#27 stephenszpak

  • Guest
  • 448 posts
  • 0

Posted 29 January 2007 - 01:44 AM

Stephen, I think you are falling prey to this anthropomorphism.

"AI may make an apparently sharp jump in intelligence purely as the result of anthropomorphism, the human tendency to think of "village idiot" and "Einstein" as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds-in-general. Everything dumber than a dumb human may appear to us as simply "dumb". One imagines the "AI arrow" creeping steadily up the scale of intelligence, moving past mice and chimpanzees, with AIs still remaining "dumb" because AIs can't speak fluent language or write science papers, and then the AI arrow crosses the tiny gap from infra-idiot to ultra-Einstein in the course of one month or some similarly short period."


cnorwood19

I think I understand you.

I sometimes think in terms of Moore's Law: what can a supercomputer do today, for example?

The AI level, if there even is a concrete term for this, of a supercomputer today that occupies
X cubic feet of space should be in a computer that occupies 1 cubic foot of space in a fixed
number of years.

64 cubic feet today (drives, memory/CPU, etc.) would be 1 cubic foot in 12 years
(probably did the math wrong) assuming transistor doubling every 24 months. So let's say
an android would use 1 cubic foot of its body space for computing. The intelligence/AI/IQ,
whatever, of a supercomputer today should be in an android by 2019 (assuming the world continues like
it has). If anyone at MIT has connected (perhaps via wireless communication) a supercomputer
to a robot (that has no computing power of its own), I think it would show where we would be
regarding robots in a fixed number of years. Make any sense?
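
The arithmetic there actually checks out, assuming density really does double every 24 months and nothing else changes:

import math

volume_ratio = 64                         # 64 cubic feet shrinking to 1 cubic foot
doublings = math.log2(volume_ratio)       # 6 doublings of density needed
years = doublings * 2                     # 24 months per doubling
print(doublings, "doublings ->", years, "years")   # 6.0 doublings -> 12.0 years
# 2007 + 12 = 2019, matching the estimate above.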


-Stephen

#28 Athanasios

  • Guest
  • 2,616 posts
  • 163
  • Location:Texas

Posted 29 January 2007 - 02:03 AM

Since there are people working on making an artificial general intelligence, it is more likely that whoever has the best software will make the first breakthrough. Either a refinement of software or a boost in hardware will allow the person with the best software to produce the AGI. Software is key, it seems, but eventually the brute force of hardware will make it work, even with less-than-optimum software.

What you seem to be talking about is a robot that has a useful narrow AI. This is totally different. I would say plenty of that exists in the form that jc1991 pointed out: automated manufacturing. There are a lot of examples of non-embodied narrow AI that are better at something than humans. Asimo appears to be a narrow-AI robot. Honda gets Asimo to do human-like actions to impress people who anthropomorphize him.

An AGI will become increasingly intelligent generally. It will have to have the ability to learn new things from its environments, and use what it already knows in unique environments effectively. If we measure how smart it is by effectiveness in the human world, then:

"Everything dumber than a dumb human may appear to us as
simply "dumb". One imagines the "AI arrow" creeping steadily up the scale of
intelligence, moving past mice and chimpanzees, with AIs still remaining "dumb" because AIs can't speak fluent language or write science papers, and then the AI arrow crosses the tiny gap from infra-idiot to ultra-Einstein in the course of one month or some similarly short period."

#29 Aegist

  • Guest Shane
  • 1,416 posts
  • 0
  • Location:Sydney, Australia

Posted 29 January 2007 - 02:31 AM

Intelligence doesn't magically emerge as a function of GHz.

This is very importantly true.

Else you have to believe that a finger is intelligent, and a toenail, and a crystal....


#30 treonsverdery

  • Guest
  • 1,312 posts
  • 161
  • Location:where I am at

Posted 29 January 2007 - 03:04 AM

The problem of a fully automated car has two solutions. The first (and easiest) is to replace current roadways with smart systems that help to guide the car. This requires more processing power and a larger infrastructure, but much less programming. The second is a car that understands the rules of the road and can guide itself without external help (using just its own sensory information to determine where the road is, where other cars are, and where numerous other obstacles are). This requires less overall processing power and a smaller infrastructure, but much more programming.


link to bug driven vehicles: http://www.halfbaker...cars#1033510367

Three
UAV unmanned aerial vehicle recon planes are flown from thousands of miles away; they could easily be UCVs, undrivered car vehicles. Could you pay a developing-world chauffeur to commute? You sure could: a dollar an hour covers commuting both ways each day at the current US average. That is different than an AI; I'm thinking of a fresh idea.

Four
It's all about how likely the passenger is to arrive living. I'm certain a person on a robot bicycle, wearing a comfy massager evacuation stretcher like medics use, has it way over robotic cars. A collision is merely amusing.

Five
Robot rollerskates with a high-mass base plus a bunch of vertical accelerometers to make sure your body stays in the comfy range. Suddenly the vehicle spacing is several times larger; plus, if you like, you can have leading and trailing flak craft to bumper you.

Six
A computer-predictive, highly harmless, slightly amusing, fun-to-practice-as-a-teenager ejection seat on a minimal vehicle. If there is a collision you get tossed with a big airbag to the shoulder. You live. It's fun, but more gauche than changing lanes. Did you see the bouncy Mars rover animation?

Seven
The City of London road fee at congested times is hugely successful. Use that to create chunks of robotic-only vehicle time.

Edited by treonsverdery, 14 February 2007 - 04:54 AM.




