  LongeCity
              Advocacy & Research for Unlimited Lifespans


Chat with Robin Hanson! Nov. 25th


22 replies to this topic

#1 Mind

  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 19 November 2007 - 10:55 PM


Considered by some to be one of the top five transhumanist thinkers in the world, Professor Robin Hanson is currently a tenured faculty member of the prestigious George Mason University economics department. He will join the Sunday Evening Chat this week, November 25th, at 3:30 p.m. That's right, Euro-immortalists: as promised, an earlier chat time for your convenience. More details to come.

Chat Room: http://www.imminst.org/chat

Sun Nov 25th
1:30 Pacific
2:30 Mountain
3:30 Central
4:30 Eastern
[timezone help]

#2 Shepard

  • Member, Director, Moderator
  • 6,360 posts
  • 932
  • Location:Auburn, AL

Posted 19 November 2007 - 10:59 PM

If you aren't familiar with Robin Hanson, here is his website (should have done this for our last two guests, too):

http://hanson.gmu.edu/home.html

#3 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 20 November 2007 - 12:29 PM

Yey! Earlier time!
Yey! Robin Hanson!
Yey!

#4 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 20 November 2007 - 09:58 PM

Hanson says he is addicted to "viewquakes", insights that dramatically change his worldview. I think I'll ask him what is shifting the ground under his worldview lately. Should be good.

#5 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 20 November 2007 - 11:37 PM

Awesome.

Everyone should check out Robin Hanson's blog http://www.overcomingbias.com/

I wonder if we could drag Eliezer Yudkowsky to one of these things.

We have before! (at least once...)

#6 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands

Posted 20 November 2007 - 11:51 PM

Just to be sure, I assume this is the applicable time? I assumed New York time = Eastern time.
Presumably the link provided by Mind still contained the "old" start time.

#7 Harvey Newstrom

  • Guest, Advisor
  • 102 posts
  • 1
  • Location:Washington, DC & FL

Posted 21 November 2007 - 05:03 AM

This should be most interesting! Unfortunately, I will be trapped on a Delta plane at that time. [sad]

#8 Shannon Vyff

  • Life Member, Director, Lead Moderator
  • 3,897 posts
  • 702
  • Location:Boston, MA

Posted 21 November 2007 - 05:48 AM

My Apple is in the computer hospital getting a new hard drive... and we are traveling for 4 days... I'll try to make the chat, would love to... but man, how did people survive without their internet?

On the plus side, I was interviewed with my whole family for Michael Arth's new movie on the emerging consciousness of transhumans. We talked cryonics, death, A.I., social issues, and the economics of ending aging--all kinds of fun stuff. He was here for 6 hours and probably took 4 hours of footage (all for 5 or 6 minutes in his film), but it will be great to have the voices of the children in it, I think :)

#9 Luna

  • Guest, F@H
  • 2,528 posts
  • 66
  • Location:Israel

Posted 21 November 2007 - 07:11 AM

but man, how did people survive without their internet?


They didn't! That's why they made it ;-)

#10 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 25 November 2007 - 06:29 PM

Just a friendly reminder: chat time with Professor Hanson is 3:30 p.m. this afternoon, about 3 hours from now.

#11 robinhanson

  • Guest
  • 8 posts
  • 0
  • Location:Burke, VA

Posted 25 November 2007 - 06:47 PM

I look forward to talking to you all. :smile:

#12 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 25 November 2007 - 09:18 PM

Chat starts soon (10 minutes)....log in here: http://www.imminst.org/chat/

#13 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 25 November 2007 - 10:48 PM

The chat continues at this time, and Prof. Hanson will stick around as long as members have questions for him. He enjoys all transhumanist-type topics.

#14 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 25 November 2007 - 11:02 PM

Now talking about the ethics of living in a simulation

#15 Shepard

  • Member, Director, Moderator
  • 6,360 posts
  • 932
  • Location:Auburn, AL

Posted 26 November 2007 - 12:34 AM

Anyone who wasn't in the chat missed out on what I found to be the most interesting and best one so far. The main advantage was the angle Dr. Hanson brought that is different from the tech-approach most of us are familiar with. He graciously stayed around for three hours covering a broad range of topics. An excellent guest, and I'm glad some of our European members were able to make it due to the earlier time.

Edited by shepard, 26 November 2007 - 12:37 AM.


#16 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 26 November 2007 - 01:43 AM

Anyone who wasn't in the chat missed out on what I found to be the most interesting and best one so far. The main advantage was the angle Dr. Hanson brought that is different from the tech-approach most of us are familiar with. He graciously stayed around for three hours covering a broad range of topics. An excellent guest, and I'm glad some of our European members were able to make it due to the earlier time.


I had to split 1/2 way through. Did anyone cut and paste the transcript?

#17 Shepard

  • Member, Director, Moderator
  • 6,360 posts
  • 932
  • Location:Auburn, AL

Posted 26 November 2007 - 02:20 AM

Anyone who wasn't in the chat missed out on what I found to be the most interesting and best one so far. The main advantage was the angle Dr. Hanson brought that is different from the tech-approach most of us are familiar with. He graciously stayed around for three hours covering a broad range of topics. An excellent guest, and I'm glad some of our European members were able to make it due to the earlier time.


I had to split 1/2 way through. Did anyone cut and paste the transcript?


At the end of the chat, someone did mention that they had the transcript and would post it up.

#18 Matt

  • Guest
  • 2,862 posts
  • 149
  • Location:United Kingdom

Posted 26 November 2007 - 01:25 PM

The chat got off to a slow start but eventually picked up, and it was definitely the best one yet. I liked the way the chat was run, too, and that he stayed around for many hours. Unfortunately I was exhausted and started falling asleep after a couple of hours.

#19 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 26 November 2007 - 06:14 PM

I have a log. Will post here soon.

#20 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 26 November 2007 - 08:15 PM

http://en.wikipedia....del_(economics)

Tests of macroeconomic predictions
In the late 1980s a research institute compared the twelve top macroeconomic models available at the time. They asked the designers of these models what would happen to the economy under a variety of quantitatively specified shocks, and compared the diversity in the answers (allowing the models to control for all the variability in the real world; this was a test of model vs. model, not a test against the actual outcome). Although the designers were allowed to simplify the world and start from a stable, known baseline (e.g. NAIRU unemployment), the various models gave dramatically different answers. For instance, in calculating the impact of a monetary loosening on output, some models estimated a 3% change in GDP after one year, one gave almost no change, and the rest were spread in between.

Partly as a result of such experiments, modern central bankers no longer have as much confidence that it is possible to 'fine-tune' the economy as they had in the 1960s and early 1970s. Modern policy makers tend to use a less activist approach, explicitly because they lack confidence that their models will actually predict where the economy is going, or the effect of any shock upon it. The new, more humble approach sees danger in dramatic policy changes based on model predictions, because of several practical and theoretical limitations in current macroeconomic models; in addition to the theoretical pitfalls (listed above), some problems specific to aggregate modelling are:

  • Limitations in model construction caused by difficulties in understanding the underlying mechanisms of the real economy (hence the profusion of separate models).
  • The law of unintended consequences, on elements of the real economy not yet included in the model.
  • The time lag in both receiving data and the reaction of economic variables to policy makers' attempts to 'steer' them (mostly through monetary policy) in the direction that central bankers want them to move. Milton Friedman has vigorously argued that these lags are so long and unpredictably variable that effective management of the macroeconomy is impossible.
  • The difficulty in correctly specifying all of the parameters (through econometric measurements) even if the structural model and data were perfect.
  • The fact that all the model's relationships and coefficients are stochastic, so that the error term becomes very large quickly, and the available snapshot of the input parameters is already out of date.
  • Modern economic models incorporate the reaction of the public and market to the policy maker's actions (through game theory), and this feedback is included in modern models (following the rational expectations revolution and Robert Lucas's critique of the optimal control concept of precise macroeconomic management). If the response to the decision maker's actions (and their credibility) must be included in the model, then it becomes much harder to influence some of the variables simulated.

#21 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 18,997 posts
  • 2,000
  • Location:Wausau, WI

Posted 27 November 2007 - 12:04 AM

The chat was awesome! Hanson is a fabulous guest, in large part because he wants to hear ideas from us as much as he wants to present ideas.

He threw down the gauntlet to ImmInst members to think seriously about the future and not get caught up in hype.

He also said that technological progress is not accelerating, a point which I disputed. Hanson claims the GDP of the world has been doubling every 15 years for the last century. That would certainly seem to be acceleration to me, since we experience time linearly. He said it is linear on a log-linear graph. True, but on a linear graph it would look like acceleration (and that is how we experience things). Anyway, he said we are due for another 'quantum leap' in progress (similar to the transitions from hunting to farming and from artisan production to industry). He thinks AGI is the most likely candidate to take us from doubling every 15 years to doubling every 2 weeks.
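
For anyone who wants to see the arithmetic behind that disagreement, here is a minimal Python sketch (my own illustration of simple compound growth, not anything presented in the chat) of what those two doubling times imply:

# Back-of-the-envelope compound-growth arithmetic, using the doubling times quoted above.

def annual_factor(doubling_time_years):
    """Multiplicative growth per year implied by a given doubling time."""
    return 2.0 ** (1.0 / doubling_time_years)

print(annual_factor(15.0))           # ~1.047 -> about 4.7% growth per year
print(annual_factor(14.0 / 365.25))  # ~7e7   -> the economy multiplies tens-of-millions-fold per year

A constant doubling time plots as a straight line on a log scale, but the raw curve is still exponential, which is why it reads as acceleration when experienced linearly.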

He also said it is nearly useless to invest in labor (for your own personal future), because the value of labor is rapidly diminishing. It is better to just invest money in index funds or something tied to the world growth rate.

Here is the chat log: Attached File: Hanson_Chat_Log.rtf (88.15 KB, 151 downloads)

#22 Bruce Klein

  • Guardian Founder
  • 8,794 posts
  • 242
  • Location:United States

Posted 27 November 2007 - 12:58 AM

Note: Robin Hanson chatted w/ ImmInst in Jan 2004.

#23 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 27 November 2007 - 02:17 AM

[00:57:59] <Robin> I do worry that the basement AI that takes over the world gets too large a fraction of attention from people who are willing to seriously think about teh future
[00:58:02] <Hank> why do you think the chances the basement AI chances are low? michael wilson would argue its a larger engineering challenge than some percieve it
[00:58:22] <Robin> economists have studied economic growth a lot
[00:58:23] <mitkat> damn! many of us are in the same boat as far as sharing transhumanist interests with others
[00:58:39] <Robin> the basement AI scenario is a growth scenario, and it is built on growth assumptions
[00:58:56] <Robin> mitkat, no interest at school
[00:59:09] <Hank> so hard-take off is infeasible
[00:59:14] <Hank> from an economic pov
[00:59:17] <Robin> I'd say unlikely
[00:59:18] <kanzure> Robin: But what about the transhumanist student groups at universities?
[00:59:24] <Robin> hard to prove about any such thing
[00:59:34] <Robin> kanzure, none at my univ
[00:59:43] <mitkat> heathly skepticism, lack of introduction or not available to the topics in their circles, or something else responsible Robin?
[00:59:47] <Robin> growth now is a proces of the whole world contributing
[00:59:54] <mitkat> for their disinterest at school
[00:59:58] <Robin> billions of tiny improvements spread across the world
[01:00:01] <kanzure> I've been talking on WTA and the extropian mailing list recently about spawning more research groups. Universities might be a good place to start. Lots of young, motivated students.
[01:00:16] <Robin> most industries are very dependent on the rest of the world for their improvment
[01:00:22] <kanzure> (not just for AI- many other transhumanist projects)
[01:00:42] <Robin> so the idea that one machine in a basement can outgrow the entire rest of the world put together
[01:00:52] <Robin> that seems on its face unlikely
[01:00:58] <Hank> well
[01:01:02] <A941> WHats the time in the USA when will our guest arrive?
[01:01:18] <Robin> A, your guest has been here for 2.5 hours
[01:01:24] <mitkat> lol
[01:01:30] <A941> damn
[01:01:37] <mitkat> A941, Robin's stayed much longer than asked, like a champ
[01:01:39] <Hank> if you are researching and developing new faster, smaller computers, having an AI to help speed things up
[01:01:57] <Hank> that's a positive feedback cycle that can do something
[01:02:04] <Robin> hank today the computer industry grows because the world helps
[01:02:32] <Robin> hank, we already have that feedback loop going
[01:02:42] <Robin> the world economy has all those feedback loops going
[01:02:48] <Robin> and that is why we are growing
[01:02:49] <Hank> so having 1 human worth of additional scientific output won't do any good
[01:03:05] <mitkat> Robin, is there any aspect of activism you think is the most important for a young person who wants to "do something" about spreading the idea of transhumanism, or even just raising awareness of the future?
[01:03:05] <Robin> it will add a tiny drop to the world, like the rest of us do
[01:03:20] <Hank> but a human has constant brain power
[01:03:24] <Hank> no access to source
[01:03:27] <Hank> etc
[01:03:31] <Robin> mitkat, I don't see much need for activism, actually
[01:03:36] <mitkat> we have a lot of members who want to do something to help the meme(s) and besides going to school, there must be other ways to raise consciousness
[01:03:44] <Robin> what we need is to understand the future better
[01:03:55] <Robin> and perhaps to help along some of the development paths
[01:04:44] <Robin> hank, there are dramatically diminshing returns to most improvements


...what?

I reiterate:

what?

3.1: Advantages of minds-in-general

From the standpoint of computer science it may seem like breathtaking audacity if I dare to predict any advantages for AIs in advance of their construction, given past failures. But from the standpoint of evolutionary psychology, the human mind has surprising flaws to match its surprising strengths. If discussing the potential advantages of "AIs" strikes you as too audacious, then consider what follows, not as discussing the potential advantages of "AIs", but as discussing the potential advantages of minds in general relative to humans. One may then consider separately the audacity involved in claiming that a given AI approach can achieve one of these advantages, or that it can be done in less than fifty years.

Humans definitely possess the following advantages, relative to current AIs:
We are smart, flexible, generally intelligent organisms with an enormous base of evolved complexity, years of real-world experience, and 10^14 parallelized synapses, and current AIs are not.

Humans probably possess the following advantages, relative to intelligences developed by humans on foreseeable extensions of current hardware:
Considering each synaptic signal as roughly equivalent to a floating-point operation, the raw computational power of a human is enormously in excess of any current supercomputer or clustered computing system, although Moore's Law continues to eat up this ground [Moravec98].
Human neural hardware - the wetware layer - offers built-in support for operations such as pattern recognition, pattern completion, optimization for recurring problems, et cetera; this support was added from below, taking advantage of microbiological features of neurons, and could be enormously expensive to simulate computationally to the same degree of ubiquity.
With respect to the holonically simpler levels of the system, the total amount of "design pressure" exerted by evolution over time is probably considerably in excess of the design pressure that a reasonably-sized programming team could expect to personally exert.
Humans have an extended history as intelligences; we are proven software.
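
To put a rough number on the raw-computational-power point above, here is a back-of-the-envelope sketch using the 10^14 synapses and ~200 Hz spike-rate figures quoted in this excerpt (my own arithmetic; real estimates vary by orders of magnitude):

# Crude Moravec-style estimate: treat each synaptic signal as roughly one floating-point op.
SYNAPSES = 1e14            # parallelized synapses, figure quoted above
MAX_SPIKE_RATE_HZ = 200    # upper-bound neuron firing rate cited in the excerpt
print(SYNAPSES * MAX_SPIKE_RATE_HZ)   # 2e16 synaptic events/sec, on the order of 10^16 "FLOPs"

That is roughly a couple of orders of magnitude beyond the few-hundred-teraflop supercomputers of 2007, which is the gap the excerpt is pointing at.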

Current computer programs definitely possess these mutually synergetic advantages relative to humans:
Computer programs can perform highly repetitive tasks without boredom.
Computer programs can execute complex extended tasks without making that class of human errors caused by distraction or short-term memory overflow in abstract deliberation.
Computer hardware can perform extended sequences of simple steps at much greater serial speeds than human abstract deliberation or even human 200Hz neurons.
Computer programs are fully configurable by the general intelligences called humans. (Evolution, the designer of humans, cannot invoke general intelligence.)

These advantages will not necessarily carry over to real AI. A real AI is not a computer program any more than a human is a cell. The relevant complexity exists at a much higher layer of organization, and it would be inappropriate to generalize stereotypical characteristics of computers to real AIs, just as it would be inappropriate to generalize the stereotypical characteristics of amoebas to modern-day humans. One might say that a real AI consumes computing power but is not a computer. This basic distinction has been confused by many cases in which the label "AI" has been applied to constructs that turn out to be only computer programs; but we should still expect the distinction to hold true of real AI, when and if achieved.

The potential cognitive advantages of minds-in-general, relative to human minds, probably include:

New sensory modalities. Human programmers, lacking a sensory modality for assembly language, are stuck with abstract reasoning plus compilers. We are not entirely helpless, even this far outside our ancestral environment - but the traditional fragility of computer programs bears witness to our awkwardness. Minds-in-general may be able to exceed human programming ability with relatively primitive general intelligence, given a sensory modality for code.
Blending-over of deliberative and automatic processes. Human wetware has very poor support for the realtime diversion of processing power from one subsystem to another. Furthermore, a computer can burn serial speed to generate parallel power but neurons cannot do the reverse. Minds-in-general may be able to carry out an uncomplicated, relatively uncreative track of deliberate thought using simplified mental processes that run at higher speeds - an idiom that blurs the line between "deliberate" and "algorithmic" cognition. Another instance of the blurring line is coopting deliberation into processes that are algorithmic in humans; for example, minds-in-general may choose to make use of top-level intelligence in forming and encoding the concept kernels of categories. Finally, a sufficiently intelligent AI might be able to incorporate de novo programmatic functions into deliberative processes - as if Garry Kasparov could interface his brain to a computer and write search trees to contribute to his intuitive perception of a chessboard.
Better support for introspective perception and manipulation. The comparatively poor support of the human architecture for low-level introspection is most apparent in the extreme case of modifying code; we can think thoughts about thoughts, but not thoughts about individual neurons. However, other cross-level introspections are also closed to us. We lack the ability to introspect on concept kernels, focus-of-attention allocation, sequiturs in the thought process, memory formation, skill reinforcement, et cetera; we lack the ability to introspectively notice, induce beliefs about, or take deliberate actions in these domains.
The ability to add and absorb new hardware. The human brain is instantiated with a species-typical upper limit on computing power and loses neurons as it ages. In the computer industry, computing power continually becomes exponentially cheaper, and serial speeds exponentially faster, with sufficient regularity that "Moore's Law" [Moore97] is said to govern its progress. Nor is an AI project limited to waiting for Moore's Law; an AI project that displays an important result may conceivably receive new funding which enables the project to buy a much larger clustered system (or rent a larger computing grid), perhaps allowing the AI to absorb hundreds of times as much computing power. By comparison, the 5-million-year transition from Australopithecus to Homo sapiens sapiens involved a tripling of cranial capacity relative to body size, and a further doubling of prefrontal volume relative to the expected prefrontal volume for a primate with a brain our size, for a total sixfold increase in prefrontal capacity relative to primates [Deacon90]. At 18 months per doubling, it requires 3.9 years for Moore's Law to cover this much ground. Even granted that intelligence is more software than hardware, this is still impressive.
Agglomerativity. An advanced AI is likely to be able to communicate with other AIs at much higher bandwidth than humans communicate with other humans - including sharing of thoughts, memories, and skills, in their underlying cognitive representations. An advanced AI may also choose to internally employ multithreaded thought processes to simulate different points of view. The traditional hard distinction between "groups" and "individuals" may be a special case of human cognition rather than a property of minds-in-general. It is even possible that no one project would ever choose to split up available hardware among more than one AI. Much is said about the benefits of cooperation between humans, but this is because there is a species limit on individual brainpower. We solve difficult problems using many humans because we cannot solve difficult problems using one big human. Six humans have a fair advantage relative to one human, but one human has a tremendous advantage relative to six chimpanzees.
Hardware that has different, but still powerful, advantages. Current computing systems lack good built-in support for biological neural functions such as automatic optimization, pattern completion, massive parallelism, etc. However, the bottom layer of a computer system is well-suited to operations such as reflectivity, execution traces, lossless serialization, lossless pattern transformations, very-high-precision quantitative calculations, and algorithms which involve iteration, recursion, and extended complex branching. Also in this category, but important enough to deserve its own section, is:
Massive serialism: Different 'limiting speed' for simple cognitive processes. No matter how simple or computationally inexpensive, the speed of a human cognitive process is bounded by the 200Hz limiting speed of spike trains in the underlying neurons. Modern computer chips can execute billions of sequential steps per second. Even if an AI must "burn" this serial speed to imitate parallelism, simple (routine, noncreative, nonparallel) deliberation might be carried out substantially (orders of magnitude) faster than more computationally intensive thought processes. If enough hardware is available to an AI, or if an AI is sufficiently optimized, it is possible that even the AI's full intelligence may run substantially faster than human deliberation.
Freedom from evolutionary misoptimizations. The term "misoptimization" here indicates an evolved feature that was adaptive for inclusive reproductive fitness in the ancestral environment, but which today conflicts with the goals professed by modern-day humans. If we could modify our own source code, we would eat Hershey's lettuce bars, enjoy our stays on the treadmill, and use a volume control on "boredom" at tax time.
Everything evolution just didn't think of. This catchall category is the flip side of the human advantage of "tested software" - humans aren't necessarily good software, just old software. Evolution cannot create design improvements which surmount simultaneous dependencies unless there exists an incremental path, and even then will not execute those design improvements unless that particular incremental path happens to be adaptive for other reasons. Evolution exhibits no predictive foresight and is strongly constrained by the need to preserve existing complexity. Human programmers are free to be creative.
Recursive self-enhancement. If a seed AI can improve itself, each local improvement to a design feature means that the AI is now partially the source of that feature, in partnership with the original programmers. Improvements to the AI are now improvements to the source of the feature, and may thus trigger further improvement in that feature. Similarly, where the seed AI idiom means that a cognitive talent coopts a domain competency in internal manipulations, improvements to intelligence may improve the domain competency and thereby improve the cognitive talent. From a broad perspective, a mind-in-general's self-improvements may result in a higher level of intelligence and thus an increased ability to originate new self-improvements.
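
Two of the figures in the list above can be checked with a few lines of Python (a sketch of the arithmetic only; the 1 GHz figure standing in for "billions of sequential steps per second" is my own round number):

import math

# "Add and absorb new hardware": how long Moore's Law (18 months per doubling)
# takes to cover a sixfold increase in capacity.
print(math.log2(6) * 1.5)   # ~3.9 years, matching the figure in the excerpt

# "Massive serialism": serial-speed gap between ~200 Hz spike trains and a chip
# executing ~1e9 sequential steps per second.
print(1e9 / 200)            # ~5e6, a several-million-fold difference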

3.2: Recursive self-enhancement

...etc

Edited by CSstudent, 27 November 2007 - 02:33 AM.




