

Posted 04 July 2006 - 02:12 AM
Edited by treonsverdery, 01 November 2006 - 12:28 AM.
Posted 05 July 2006 - 02:00 AM
Ben was quoted: Chronic fatigue has genetic roots
Massive data-crunch points to basis of inscrutable disease.
Researchers at the US Centers for Disease Control and Prevention (CDC) in Atlanta tackled the problem in a new way. They handed four teams of scientists a massive set of information about the symptoms and biology of CFS patients, and challenged them to pull out anything that might explain the disease.
One study showed that patients with CFS tend to have a characteristic set of changes in 12 genes that help the body respond to stress. They showed that a particular combination of gene sequences could predict whether a patient had CFS with over 75% accuracy.
"CFS is a real bodily dysfunction," says Ben Goertzel of Biomind, a biotechnology company based in Rockville, Maryland, who led one of the groups. "The idea that these people are just tired is pretty clearly refuted by this batch of results."
Posted 05 July 2006 - 05:41 AM
Posted 05 July 2006 - 08:21 AM
I was hoping that you'd engage me a bit more.
Quote: "Regarding knowledge representation, we have chosen an intermediate-level atom network representation which somewhat resembles classic semantic networks but has dynamic aspects that are more similar to neural networks. This enables a breadth of cognitive dynamics, but in a way that utilizes drastically less memory and processing than a more low-level, neural network style approach. The details of the representation have been designed for compatibility with the system’s cognitive algorithms."
http://www.novamente...file/AAAI04.pdf
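For readers who haven't seen this kind of design before, here is a rough sketch of one way to read that paragraph: atoms are nodes and links that carry both a semantic-network-style truth value and a neural-network-style activation that spreads to neighbours. This is only my interpretation of the quoted description, not Novamente's actual code or data structures.

# Rough sketch of an "atom network": semantic-network-style nodes/links with
# neural-network-style spreading activation. Hypothetical, not Novamente's code.
class Atom:
    def __init__(self, name, truth=0.5):
        self.name = name
        self.truth = truth        # semantic-network-like strength of the concept/relation
        self.activation = 0.0     # neural-network-like short-term importance
        self.links = []           # (other_atom, weight) pairs

    def link(self, other, weight=1.0):
        self.links.append((other, weight))
        other.links.append((self, weight))

def spread_activation(atoms, decay=0.5, steps=3):
    # Repeatedly pass a fraction of each atom's activation to its neighbours.
    for _ in range(steps):
        incoming = {a: 0.0 for a in atoms}
        for a in atoms:
            for other, w in a.links:
                incoming[other] += a.activation * w * decay
        for a in atoms:
            a.activation = min(1.0, a.activation * (1 - decay) + incoming[a])

cat, animal, pet = Atom("cat", 0.9), Atom("animal", 0.95), Atom("pet", 0.8)
cat.link(animal, 0.8)
cat.link(pet, 0.6)
cat.activation = 1.0
spread_activation([cat, animal, pet])
print([(a.name, round(a.activation, 2)) for a in [cat, animal, pet]])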
Do you see a Technological Singularity happening anytime soon?
Posted 05 July 2006 - 02:14 PM
Posted 05 July 2006 - 04:43 PM
Posted 05 July 2006 - 05:37 PM
Posted 05 July 2006 - 06:47 PM
Right, and Peter Voss says it well: Typically people abstract the second part out of the equation.
Why does that happen, exactly? The history of failure for these things, the perceived lack of availability of potential solutions, the technical depth required to evaluate such claims, lack of concrete information about claimed projects...?
Posted 05 July 2006 - 06:59 PM
Posted 05 July 2006 - 08:40 PM
Posted 06 July 2006 - 01:14 AM
This last sentence hints at why we need AGI. Finding patterns in large amounts of data is what is required to understand the protein folding problem... a task more suited to an artificial scientist than a human one.
Do I see a Technological Singularity happening any time soon? Until such time as we can simulate the level of complexity involved in the development of a cell into an organism, no, I don't, particularly when at present we have difficulty predicting the folding of a single protein.
Posted 06 July 2006 - 01:25 AM
Biological life is quite fascinating. However, this should not mean that the prospect of artificial life is to be discredited merely because it is still in development... especially when it has the potential to solve so many problems, including aging.
What we do know with a high degree of confidence is that intelligence is an emergent property. Take a genome, place it into a cell, add the right environment, and you can create a Mozart or an Einstein. Now that is more extraordinary than any contemplation of a singularity.
Posted 06 July 2006 - 02:00 AM
Yet in 2005 glia (another cell type found in the brain in addition to neurons) were found to contribute to neural networks in a way that had never been understood before. There are likely further levels of regulation and encoding of information that may act on even more subtle and as yet undiscovered levels. The human brain remains the most complex known object in the universe. I am therefore very doubtful of those who propose that a "mind" may emerge based on their knowledge (or, more likely, lack thereof) of neural networks. If one is seeking to model intelligence, one must understand the principles - the rules - by which it works.
Approaches and Projected Time Frames in Reaching Artificial General Intelligence (AGI)
As knowledge of the human brain increases and the cost of computing power decreases, more scientists are coming to see that creating powerful Artificial General Intelligence (AGI) by emulating the human brain in software is possible.
Currently, however, a substantial knowledge gap exists between our understanding of the lower-level neuronal mechanisms of the brain, and our understanding of its higher-level dynamics and cognitive functions. Creating AGI based on brain mapping must wait until quantitative improvements in brain scanning and modeling lead to revolutionary new insights into brain dynamics, filling in the knowledge gap. There is little doubt that this will happen, but it is hard to project how long it will take. Kurzweil estimates 2045[1] based on systematic extrapolation of the observed rate of improvement of brain scanning technology.
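As a toy illustration of what "systematic extrapolation of the observed rate of improvement" amounts to, the sketch below fits an exponential trend to invented brain-scanning capability figures and asks when the trend would cross an (equally invented) target. None of the numbers are Kurzweil's; it only shows the shape of the calculation.

# Toy exponential extrapolation: fit log(capability) vs. year and project when
# a target capability would be reached. The data points below are invented.
import numpy as np

years = np.array([1985, 1990, 1995, 2000, 2005], dtype=float)
# Hypothetical relative resolution/throughput of brain scanning (arbitrary units).
capability = np.array([1.0, 3.0, 10.0, 30.0, 100.0])

# A straight-line fit in log space is an exponential trend in linear space.
slope, intercept = np.polyfit(years, np.log(capability), 1)

target = 1e7  # hypothetical capability needed for emulation-grade scanning
year_reached = (np.log(target) - intercept) / slope
print("doubling time: %.1f years" % (np.log(2) / slope))
print("trend crosses target around year %.0f" % year_reached)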
Computer-science-based approaches to AGI, on the other hand, provide an exciting possible shortcut. There is no need to wait for brain scanning to improve and for neuroscience to undergo a revolution: leading AI theorists such as Marvin Minsky[2] agree that, with the right AGI design, contemporary computing hardware is quite likely adequate to support the implementation of AGI at the human level and beyond.
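The hardware-adequacy claim usually rests on back-of-envelope arithmetic like the following. Every figure here is a common order-of-magnitude guess and every one of them is debatable; the point of the argument above is that a good AGI design may need far less than this brute-force estimate.

# Much-debated back-of-envelope estimate of the brain's raw processing rate.
neurons = 1e11            # commonly cited order-of-magnitude neuron count
synapses_per_neuron = 1e3
firing_rate_hz = 1e2      # rough average event rate per synapse
ops_per_event = 1         # treat each synaptic event as ~1 operation (a big assumption)

brain_ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz * ops_per_event
print("brain: ~%.0e operations/second" % brain_ops_per_sec)   # ~1e16

cluster_ops_per_sec = 1e14  # hypothetical large 2006-era cluster (order of magnitude)
print("such a cluster is ~%.0fx short of that figure" % (brain_ops_per_sec / cluster_ops_per_sec))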
Skeptics will point out that the computer-science-based approach to AGI has been pursued for some time without dramatic successes. But computers have never been as powerful as they are now, and, more importantly, the field has so far lacked adequate AGI designs that take into account the comprehensive knowledge gained by cognitive science, computer science and neuroscience.
If pursued properly, based on a powerful AGI design, the computer-science approach may lead to AGI at the human level and beyond within the next decade. These computer-science-based AGIs would then, among other transformative effects, drastically accelerate progress in science and engineering, including brain mapping and neuroscience itself.
1. "The Singularity Is Near", p. 136.
2. http://www.novamente...file/AAAI06.pdf
Posted 06 July 2006 - 04:34 AM
Quote: "There are likely further levels of regulation and encoding of information that may act on even more subtle and as yet undiscovered levels."
Posted 06 July 2006 - 04:59 AM
Posted 06 July 2006 - 08:54 AM
How do you know where we are in our knowledge about brains and intelligence?
However, I am not certain that your statement offers anything concrete by which to debate, Bruce.
I am not going to go out on a limb ...
Posted 06 July 2006 - 09:16 AM
Is this particular thread to dissolve into a dismissal of AGI as too hard, with no bearing on Life Extension?
Posted 06 July 2006 - 04:47 PM
Posted 06 July 2006 - 04:56 PM
For others it is a yearning so deep-seated that I am beginning to think of it as a transhumanist replacement for religion.
Posted 06 July 2006 - 07:53 PM
Makes me wonder if there's any way an aged upload can continue dying from Alzheimer's because nobody knows how to cure it even in silico... Probably they'd just freeze him until they figure it out somehow -- Argh, we need that world, it would make things so easy :-)
Why must I understand what I am copying?
Posted 07 July 2006 - 12:09 AM
Posted 07 July 2006 - 01:02 AM
I'm not really interested in convincing everybody. I'm not even interested in convincing our resident critics that it's possible.
Hopefully, as MA suggests, we needn't try and convince everyone in order to make it a reality.
Edited by jaydfox, 07 July 2006 - 01:25 AM.
Posted 07 July 2006 - 01:21 AM
I'm not really interested in convincing everybody. I'm not even interested in convincing our resident critics that it's possible.
Hopefully, as MA suggests, we needn't try and convince everyone in order to make it a reality.
I'm only interested in making it clear that it's not "obviously" impossible, and should remain something ImmInst advocates, because it is relevant, even if we disagree on how important it will be in the short-, mid-, and long-term.
And when I say it should remain something ImmInst advocates, I don't just mean advocacy by individual members, but as something the institute itself supports, the way we support life extension via biotech (e.g., SENS). Whether that's through a declaration of principle, or space in our next book, or a panel at a future conference, etc., it should be something we support officially. If our resident critics have a problem with it, then debate it in the fora, but don't try to have it removed from the institute's official focus.
Hmm, Harold, I think I owe you an apology. In the chaos of ideas being tossed around, dropped, morphed, etc., and the incendiary exchanges here and there, I seem to recall at some point getting the impression that you had shot down Bruce's attempt to give AGI and other technologies a similar limelight to biotech.
Skimming back, I see now that, quite the opposite, you even advocated DoPs for cryogenics (cryonics), transhumanism, and the singularity.
Now I'm confused. Until I figure out where I got so far off base, please accept my apology.
Posted 07 July 2006 - 01:21 AM
This is a pretty useless statement.
I certainly do not believe that software can be made sufficiently complex for consciousness to emerge.
First of all, I would argue that our brain is a piece of hardware running some software - primarily, our "consciousness".
I use the phrase "sufficiently complex" in the context of emergent properties (http://en.wikipedia.org/wiki/Emergent). In my view, a system must be sufficiently complex for consciousness to emerge.
Second of all, why the arbitrary assertion about software being "sufficiently complex"? Are you saying there is some functional complexity (that does something necessary for consciousness) that cannot be mirrored as a piece of software? From atoms to cells to bacteria to bears we can map physical function to physical form, with no evidence of any extra-physical anomaly ("form fits function" is the first thing I ever learned about biology). Why would you assume that there is a physical anomaly in humans (one that somehow makes us "extra-dimensional" or "unable to be represented as complex software"), when this hasn't been observed anywhere else?
I don't think so. Let's look at what Michael said. Firstly, he referred to research headed by Prof. Theodore Berger which is seeking to model the function of the hippocampus. What the researchers did was measure the electrophysiological signals going into and coming out of rat hippocampal slices, analyse them via DSP methods, and develop a set of non-linear equations that can model some of that signaling. They believe they can have reasonable success at this because the hippocampus is the most uniformly ordered neurally networked region of the brain. This was widely reported in 2003 because it foretold the coming of a prosthetic hippocampus. Since then there has been little or no progress -- or at least no known reports -- in going from hippocampal slices to in vivo studies. I find the work very interesting, but it can be very easily shot down if it's going to be used as support for the notion of an AGI. Then Michael went back to the argument which seeks to compare the computational power of the brain with that of modern computers. I would like to know how Michael is determining the computational power of the human brain when, as I have already mentioned, we are still uncovering its modes of processing.
I think Michael sufficiently dismissed this statement (or showed you how to do so). I do think that AGI will come from actually understanding how intelligence works rather than from simulation, computational evolution, etc., though anything is certainly possible.
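To give a feel for the style of modelling described (without claiming this is what Berger's group actually does), the sketch below fits a small polynomial, Volterra-like input-output model to a synthetic signal: record input and output, build regressors from a short history of the input, and solve for coefficients by least squares.

# Simplified sketch of nonlinear input-output modelling in the spirit of the
# hippocampal-slice work: predict an "output" signal from a short history of
# the "input" signal using polynomial (Volterra-like) terms. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n, lags = 2000, 3

x = rng.normal(size=n)                       # stand-in for the recorded input signal
# Synthetic "tissue": an unknown nonlinear transform of recent inputs plus noise.
y = 0.6 * x + 0.3 * np.roll(x, 1) - 0.2 * x * np.roll(x, 1) + 0.05 * rng.normal(size=n)

# Build regressors: current and lagged inputs, plus their pairwise products.
cols = [np.roll(x, k) for k in range(lags)]
cols += [cols[i] * cols[j] for i in range(lags) for j in range(i, lags)]
A = np.column_stack(cols)[lags:]             # drop the first samples affected by roll()
b = y[lags:]

coef, *_ = np.linalg.lstsq(A, b, rcond=None)
pred = A @ coef
print("fraction of output variance explained: %.3f"
      % (1 - np.var(b - pred) / np.var(b)))

The real hippocampal models operate on spike trains and use far richer kernel expansions; the point here is only the input/output, system-identification flavour of the approach.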
Posted 07 July 2006 - 01:22 AM
Posted 07 July 2006 - 01:31 AM
AGI is in the same spot powered flight was about 120 yrs ago. Ask most anyone then, scientists especially, and they would have said it was a far-off, nearly impossible dream... close to a religiously inspired dream at that, since it seemed that only gods could fly. Hopefully, as MA suggests, we needn't try and convince everyone in order to make it a reality.
Posted 07 July 2006 - 01:35 AM
AGI doesn't have to be "conscious", in whatever mystical or functional way we might be talking about (since I'm sure we all mean different things by "conscious"), to be intelligent enough to self-improve and scale with increased computing power to something far more intelligent than humans. Consciousness is a sufficient, but not necessarily necessary, condition for human-level intelligence in the domains required to spark a singularity.
1. We have no practical definition of consciousness, therefore we cannot model it.
Posted 07 July 2006 - 01:35 AM
If our resident critics have a problem with it, then debate it in the fora, but don't try to have it removed from the institute's official focus.