  LongeCity
              Advocacy & Research for Unlimited Lifespans





Systems Biology


24 replies to this topic

#1 kevin

  • Member, Guardian
  • 2,779 posts
  • 822

Posted 07 May 2005 - 04:35 AM


Link: http://www.guardian....1477568,00.html


The unselfish gene
The new biology is reasserting the primacy of the whole organism - the individual - over the behaviour of isolated genes

Johnjoe McFadden
Friday May 6, 2005
The Guardian


What is a gene? Scientists eager to uncover genes for heart disease, autism, schizophrenia, homosexuality, criminality or even genius are finding that their quarry is far more nebulous than they imagined. Uncovering the true nature of genes has turned biology on its head and is in danger of undermining the whole gene-hunting enterprise.
The first clues turned up in studies of the cell's metabolic pathways. These pathways are like Britain's road networks that bring in raw materials (food) and transport them to factories (enzymes) where the useful components (molecules) are assembled into shiny new products (more cells). A key concept was the "rate-limiting step", a metabolic road under strict traffic control that was thought to orchestrate the dynamics of the entire network.

Biotechnologists try to engineer cells to make products but their efforts are often hindered, apparently by the tendency of the key genes controlling the rate-limiting steps to reassert their own agenda. Scientists fought back by genetically engineering these genes to prevent them taking control. When they inserted the engineered genes back into the cells they expected to see an increase in yields of their products. But they were disappointed. The metabolic pathways slipped back into making more cells, rather than more products.
Geneticists were similarly puzzled by an abundance of genes with no apparent function. Take the "prion gene". This is the normal gene that in mad cow disease is transformed into the pathogenic brain-destroying protein. But what does it normally do? The standard way to investigate what a gene does is to inactivate it and see what happens. But geneticists who inactivated the mouse's prion gene found that the mutant mice were perfectly normal. The prion gene, like many other genes, seems to lack a function.

But a gene without function isn't really a gene at all. By definition, a "gene" has to make a difference; otherwise it is invisible to natural selection. Genes are those units of heredity that wrinkled Mendel's peas and are responsible for making your eyes blue, green or brown. A century of reductionist biology has tracked them down, through Watson and Crick's double helix, to the billions of A, T, G and C gene letters that were spewed out of the DNA sequencers. But now it seems that the genes, at the level of DNA, are not the same as genes at the level of function.

The answer to these riddles is being unravelled in an entirely new way of doing biology: systems biology. Let's return to that road network. We may identify a particular road, say the A45, that takes goods from Birmingham to Coventry, and call it the BtoC road, or BtoC gene. Blocking the A45 might be expected to prevent goods from Birmingham reaching Coventry. But of course it doesn't, because there are lots of other ways for the goods to get through. In truth the "road" (or gene) from B to C isn't just the A45 but includes all those other routes.

Rather than having a single major function, most genes, like roads, probably play a small part in lots of tasks within the cell. By dissecting biology into its genetic atoms, reductionism failed to account for these multitasking genes. So the starting point for systems biologists isn't the gene but rather a mathematical model of the entire cell. Instead of focusing on key control points, systems biologists look at the system properties of the entire network. In this new vision of biology, genes aren't discrete nuggets of genetic information but more diffuse entities whose functional reality may be spread across hundreds of interacting DNA segments.

This radical new gene concept has major implications for the gene hunters. Despite decades of research few genes have been found that play anything more than a minor role in complex traits like heart disease, autism, schizophrenia or intelligence. The reason may be that such genes simply don't exist. Rather than being "caused" by single genes these traits may represent a network perturbation generated by small, almost imperceptible, changes in lots of genes.

And what about "selfish genes", the concept introduced by the Oxford biologist Richard Dawkins to describe how some genes promote their own proliferation, even at the expense of the host organism? The concept has been hugely influential but has tended to promote a reductionist gene-centric view of biology. This viewpoint has been fiercely criticised by many biologists, such as the late Stephen Jay Gould, who argued that the unit of biology is the individual not her genes. Systems biology is reasserting the primacy of the whole organism - the system - rather than the selfish behaviour of any of its components.

Systems biology courses are infiltrating curricula in campuses across the globe and systems biology centres are popping up in cities from London to Seattle. The British biological research funding body, the BBSRC, has just announced the creation of three systems biology centres in the UK. These centres are very different from traditional biology departments as they tend to be staffed by physicists, mathematicians and engineers, alongside biologists. Rather like the systems they study, systems biology centres are designed to promote interactivity and networking.

And of course, outside of biology, there will be many who will be saying, "I told you so". Holistic approaches have always dominated the humanities and social sciences. The first eight chapters of Salman Rushdie's Midnight's Children describe the lives of the narrator's grandparents, parents, aunts, uncles and friends against the backdrop of the tumultuous politics of 20th-century India and Pakistan. The reason, according to the narrator, is that "to understand just one life, you have to swallow the world". Perhaps biologists ought to have read more.

Johnjoe McFadden is professor of molecular genetics at the University of Surrey and author of Quantum Evolution

j.mcfadden@surrey.ac.uk

#2 kevin

  • Topic Starter
  • Member, Guardian
  • 2,779 posts
  • 822

Posted 10 August 2005 - 02:39 AM

Link: http://www.the-scien...m/2005/8/1/30/1

Thankfully we don't have to rely on our brains anymore...



T-Cell Signaling Pathways Decoded In Silico
Bayesian network inference culls causal relationships from large biological data sets
By Sarah Rothman
© 2005 AAAS
[Figure: Schematic of Bayesian network inference using multidimensional flow cytometry data. Bayesian network analysis of the data from flow cytometry of 11 phosphoproteins and phospholipids in individual cells extracts an influence diagram reflecting the causal relationship between signaling network molecules.]

Researchers have spent decades determining how proteins interact with each other in complex signaling networks by studying these relationships one at a time in isolation. This approach may have been necessary, but the resulting maps are accordingly suspect. "The average signaling map is a composite of data from everything from nematodes to yeast ... and although we draw an arrow between two molecules, whether it works in any given cell type and the strength, and timing and conditions are very contextual," says Stanford University researcher Garry Nolan.

Using a largely computational approach, Nolan and colleagues at the Massachusetts Institute of Technology quickly and successfully reverse-engineered a T-cell signaling map that took biochemists decades to describe.1 [:o] Their study combined a powerful statistical technology, called Bayesian network inference, with a large biological dataset obtained from flow cytometry. "We can now infer these map links in a more holistic sense. That is, rather than a linkage between proteins one by one at a time, we can take huge bites out of the networks, maybe up to 20 proteins at a time," Nolan says.

Bayesian networks are graphical models in which variables are represented as nodes, and arrows between those nodes represent ways in which the variables are dependent on each other. The system generates a variety of models and uses data to statistically determine the most probable causal relationships between nodes.
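To make the scoring idea concrete, here is a minimal sketch in Python, assuming three binary variables, invented probabilities, and a plain BIC score -- a toy stand-in, not the actual machinery Sachs et al. used (their data were continuous and their search covered 11 variables):

```python
import math, random

random.seed(0)

# Toy "single-cell" data: variable 0 influences 1, and 0 and 1 jointly
# influence 2 (stand-ins for three phosphoproteins). All numbers invented.
def sample_cell():
    a = int(random.random() < 0.5)
    b = int(random.random() < (0.9 if a else 0.1))
    c = int(random.random() < (0.9 if a and b else 0.1))
    return (a, b, c)

data = [sample_cell() for _ in range(2000)]

def bic_score(dag):
    """dag maps node index -> tuple of parent indices. Higher is better."""
    n, loglik, n_params = len(data), 0.0, 0
    for node, parents in dag.items():
        counts = {}  # parent configuration -> [count node=0, count node=1]
        for row in data:
            key = tuple(row[p] for p in parents)
            counts.setdefault(key, [0, 0])[row[node]] += 1
        for c0, c1 in counts.values():
            for c in (c0, c1):
                if c:
                    loglik += c * math.log(c / (c0 + c1))
        n_params += len(counts)  # one free parameter per parent configuration
    return loglik - 0.5 * n_params * math.log(n)

candidates = {
    "chain 0 -> 1 -> 2":        {0: (), 1: (0,), 2: (1,)},
    "0 -> 1, both -> 2 (true)": {0: (), 1: (0,), 2: (0, 1)},
    "no edges":                 {0: (), 1: (), 2: ()},
}
for name, dag in candidates.items():
    print(f"{name:>26}  BIC = {bic_score(dag):10.1f}")
```

Run on the toy data, the structure that generated the data outscores the alternatives; a real search compares thousands of candidate graphs rather than three.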

In Nolan's study, researchers first used flow cytometry to measure simultaneously the levels of phospholipids and phosphoproteins in individual cells following several different perturbation conditions. Then, using this large dataset, they applied Bayesian network inference to recreate the structure of a signaling network connecting phosphorylated proteins in T-cell signaling through causal relationships. This result was compared to a widely accepted T-cell signaling map to search for matching and missing causal arcs. The study not only regenerated connections widely accepted in the literature, but also discovered and verified new connections between molecules.

PERTURBATION INPUT DATA
Nolan says the study's success was due to the nature of the dataset. Flow cytometry enabled simultaneous measurement of multiple protein levels in individual cells, avoiding the noise introduced by population averaging and providing a much larger dataset than could be obtained from traditional techniques such as Western blotting. In addition, perturbations to the system generated data indicating that changes in the levels of one protein affect other proteins in the system. These data allowed the statistical system to add directionality to the arrows it placed between molecules.

Using truncated datasets, the researchers showed not only that the Bayesian analysis thrives on sheer data quantity, but also that the perturbation data were key to predicting causality. Relationships derived from Bayesian analysis of datasets that lacked the perturbation conditions were less accurate in predicting the directionality of associations between molecules. Nolan likens the system to a delicate spider's web: "To understand a spider web you have to step back from it and pull on the individual strands and measure the effects at a distance to understand the relationships between individual components."
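The spider-web quote can be turned into a toy demonstration. The sketch below, with invented numbers, shows why observational data alone cannot orient an arrow between two correlated proteins, while clamping ("pulling on") one of them can: if the true arrow is X -> Y, forcing Y leaves X at its baseline.

```python
import random

random.seed(1)

def observe(n):
    """True model: X -> Y. Unperturbed cells."""
    rows = []
    for _ in range(n):
        x = int(random.random() < 0.5)
        y = int(random.random() < (0.9 if x else 0.1))
        rows.append((x, y))
    return rows

def clamp_y(n, value):
    """Perturbation: Y is forced to `value` regardless of X."""
    return [(int(random.random() < 0.5), value) for _ in range(n)]

obs = observe(5000)
forced = clamp_y(5000, 1)

x_given_y1 = [x for x, y in obs if y == 1]
print(f"P(X=1 | Y=1, observed)  = {sum(x_given_y1) / len(x_given_y1):.2f}")  # ~0.9
print(f"P(X=1 | Y clamped to 1) = {sum(x for x, _ in forced) / len(forced):.2f}")  # ~0.5
# X stays at baseline under the clamp, consistent with X -> Y;
# had the arrow been Y -> X, clamping Y would have dragged X with it.
```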

The networks in this study could not predict cyclic relationships, such as those obtained in the many signaling systems that rely on feedback loops. Larry Lok, of the Molecular Sciences Institute in Berkeley, Calif., explains that this is due to the theoretical framework on which it is based. "Causality is a kind of one-directional thing, and a thing can't normally cause itself." [lol] [self-referencing systems are next I expect.. ]

But Lok was pleased with the mathematical application. "It's very satisfying when a new mathematical analysis yields results that look reasonable and reassuring, but also uncover a few new and unexpected things as well." Lok says he believes the system might be adapted to take time into account. "You can expand the data in time and still apply a similar Bayesian analysis, called dynamic Bayesian networks. Computationally it's more intense; whether it's practical or not remains to be seen."

Nir Friedman, a computer science professor at Hebrew University in Jerusalem, agrees that the temporal aspect "would be nice, but we can still learn a lot without it." He adds that the study has promise for determining how signaling systems in human cells differ in diseased and normal states.

References

1. K Sachs et al., "Causal protein-signaling networks derived from multiparameter single-cell data," Science 2005, 308: 523-9.


#3 Mark Hamalainen

  • Guest
  • 564 posts
  • 0
  • Location:San Francisco Bay Area
  • NO

Posted 10 August 2005 - 04:27 AM

"The unselfish gene"

This is why I don't read popular science...

Rather than having a single major function, most genes, like roads, probably play a small part in lots of tasks within the cell. By dissecting biology into its genetic atoms, reductionism failed to account for these multitasking genes.


This was known when the selfish gene concept was introduced, and I don't see how it conflicts. Of course individual genes play a small part.

And what about "selfish genes"... Systems biology is reasserting the primacy of the whole organism - the system - rather than the selfish behaviour of any of its components.


Such melodrama and sensationalism... why does the concept of selfish gene have to be at odds with networks or systems biology? The article doesn't have much to say on that... I've never understood the marketability of sensationalist science writing...

#4 wraith

  • Guest
  • 182 posts
  • 0

Posted 10 August 2005 - 05:36 AM

why does the concept of selfish gene have to be at odds with networks or systems biology? 


Might be the unit of selection debate; Dawkins argues that the unit of selection is the gene.
(My opinion - not that it counts for much - is that Dawkins is wrong.)

~~~

Thanks for the articles, Kevin.

#5 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 10 August 2005 - 07:09 AM

I don't think Dawkins said anything falsifiable about selfish genes... It's just a matter of interpretation, not of truth.

#6 Mark Hamalainen

  • Guest
  • 564 posts
  • 0
  • Location:San Francisco Bay Area
  • NO

Posted 10 August 2005 - 07:22 AM

Dawkins argues that the unit of selection is the gene


That is an oversimplification. I haven't actually read his book.

I just find it irritating the way the popular media fallaciously polarizes everything into two opposing and incompatible sides.

#7 wraith

  • Guest
  • 182 posts
  • 0

Posted 10 August 2005 - 12:42 PM

An oversimplification?
Is it? I've read some Dawkins (The Selfish Gene, The Blind Watchmaker, and a few essays); I've also read books by others on the topic.
One I'd recommend is The Nature of Selection (Elliott Sober).

#8

  • Lurker
  • 1

Posted 10 August 2005 - 01:09 PM

Wraith: you say that the gene being the unit of selection is wrong. That's interesting. What do you mean?

#9 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 10 August 2005 - 02:46 PM

A gene's "imperative" is to perpetuate itself forward in the gene pool. If this weren't its imperative then it would not exist in the first place. This truism was all Dawkins had in mind when he coined the term "selfish gene".

Perpetuation can be accomplished either through incorporation into a collective genome, or through other "parasitic" or antagonistic strategies. Genes that perpetuate themselves by being part of a collective ARE being selfish by being exceedingly cooperative. Reminds me of the popularly coined phrase "benselfishness".

"We must all hang together, or assuredly we shall all hang separately." -- Ben Franklin (Sums up our relationship here at ImmInst quite well, wouldn't you say?)

Gene selectionists, organismal selectionists, group selectionists...when push comes to shove they all converge on the same hierarchical model (albeit cloaked in their own very unique verbiage ;) ).

Osiris isolated the real fallacy in the article:

Rather than having a single major function, most genes, like roads, probably play a small part in lots of tasks within the cell. By dissecting biology into its genetic atoms, reductionism failed to account for these multitasking genes.


One of the most effective ways for a gene to be "selfish" is to multitask. What better way for it to become indispensable and permanently embedded in the collective genome!

#10 wraith

  • Guest
  • 182 posts
  • 0

Posted 10 August 2005 - 04:06 PM

And oh yeah... Dawkins' The Extended Phenotype.

~~~

Prometheus,

Suppose you have an allele that - I dunno - makes your fur blue instead of red. But suppose there is another allele at another locus that modifies the red allele such that it expresses as blue. Now suppose some critters by chance get some blue paint dumped on them. And suppose for some reason blue is now the in color. Does it matter if a critter has the blue allele at the first locus, the modifier allele at the second locus, or the good fortune to have been painted blue? Not a bit. What matters is the individual's phenotype, since it is the individual that is subject to loss (or gain) of fitness in a particular generation. Now you can say that being painted blue is not a heritable trait and won't make a difference over the long haul - but that's not what I'm talking about. All I care about for naming the unit of selection is what's happening in one generation. Like John said, it might just be a matter of interpretation (or perspective, if you wish). I guess by saying Dawkins is 'wrong' I mean that I don't find his approach to have overwhelming heuristic value. But he did sell a lot of books and he has way more money than me, so WTF do I know?

By the way, I like your screen name. Have you read of the Raven-steals-the-light story in Haida mythology?

~~~

Okay Don - but what if the multitasking gene adds to fitness for some traits but takes away for some others, depending upon the environment? You'd have to measure the total effect on the organism as a whole.

#11 Mark Hamalainen

  • Guest
  • 564 posts
  • 0
  • Location:San Francisco Bay Area
  • NO

Posted 10 August 2005 - 06:05 PM

Okay Don - but what if the multitasking gene adds to fitness for some traits but takes away for some others, depending upon the environment? You'd have to measure the total effect on the organism as a whole.


Of course a single gene on a plasmid in a test tube won't be very powerful in this context; it is dependent on interactions with the rest of the organism. The point is that a gene behaving in a selfish way is not mutually exclusive with genes interacting in complex ways.

In terms of claiming that the gene is the unit of heredity, well, this is a matter of definitions. I was thinking about the typical molecular biologist's usage of the term, i.e. a sequence of bases with promoter, exons, introns, and a stop signal. There are of course many variations on this theme, and there are other functions and types of DNA sequences that play a role in heredity. But of course if you define a gene as the unit of heredity, then it becomes an abstract term that refers to whatever our best understanding of the unit of heredity happens to be.

#12 DJS

  • Guest
  • 5,798 posts
  • 11
  • Location:Taipei
  • NO

Posted 10 August 2005 - 06:27 PM

wraith

And oh yeah... Dawkins' The Extended Phenotype.


Both Dawkins and Gould are excellent reads. I still haven't gone out and gotten Dawkins' latest release... it looks really good.

All I care about for naming the unit of selection is what's happening in one generation.


If you define selection using these parameters then, naturally, you would favor organismal selection. There is no denying that the individual organism is an object upon which selection pressure is exerted. However, does this make the organism the unit of natural selection?

Like John said, it might just be a matter of interpretation (or perspective, if you wish).


No doubt!

I guess by saying Dawkins is 'wrong' I mean that I don't find his approach to have overwhelming heuristic value. 


Remember the Hardy-Weinberg Equilibrium? Evolution is defined as *change in allele frequency over time*. That's a pretty strong heuristic if you ask me. ;) (By the way, I am by no means a "gene centrist" even though I may appear to be toeing the line right now)
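For anyone rusty on it, here is a back-of-the-envelope sketch of that heuristic, with made-up fitness values: Hardy-Weinberg genotype proportions, then the textbook formula p' = (p^2*w_AA + p*q*w_Aa) / w_mean for the allele frequency after one generation of selection.

```python
p = 0.6                                    # frequency of allele A
q = 1 - p                                  # frequency of allele a
AA, Aa, aa = p * p, 2 * p * q, q * q       # Hardy-Weinberg proportions
print(f"AA={AA:.2f}  Aa={Aa:.2f}  aa={aa:.2f}  (sum={AA + Aa + aa:.2f})")

w_AA, w_Aa, w_aa = 1.0, 1.0, 0.8           # hypothetical relative fitnesses
w_bar = AA * w_AA + Aa * w_Aa + aa * w_aa  # mean fitness of the population
p_next = (AA * w_AA + 0.5 * Aa * w_Aa) / w_bar
print(f"p: {p:.3f} -> {p_next:.3f} after one generation of selection")
```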

Okay Don - but what if the multitasking gene adds to fitness for some traits but takes away for some others, depending upon the environment? You'd have to measure the total effect on the organism as a whole.


Yes, naturally a phenotype is selected for in toto, but again there is a difference between being an object of selection and a fundamental unit. An individual organism is the phenotypic expression of a fleeting amalgamation of genes. Genes, on the other hand, are the closest biology gets to being truly immortal.

** In terms of trade-offs in relative fitness, the one that immediately comes to mind from aging theory is antagonistic pleiotropy.

#13 wraith

  • Guest
  • 182 posts
  • 0

Posted 10 August 2005 - 08:20 PM

Remember the Hardy-Weinberg Equilibrium? Evolution is defined as *change in allele frequency over time*. That's a pretty strong heuristic if you ask me. ;) (By the way, I am by no means a "gene centrist" even though I may appear to be toeing the line right now)


Yup; and drift can cause gene freq changes from one generation to the next, just as selection can.
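A quick sketch of the drift half of that sentence, with an arbitrary population size: no selection at all, just random sampling of gene copies, still shifts the frequency every generation.

```python
import random

random.seed(2)

N, p = 50, 0.5  # 50 diploid individuals, so 2N gene copies; all neutral
for gen in range(1, 6):
    # Each copy in the next generation is drawn at random from the current pool.
    copies = sum(random.random() < p for _ in range(2 * N))
    p = copies / (2 * N)
    print(f"generation {gen}: p = {p:.3f}")
```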

Don't worry, I'm not going to pigeon-hole you as a gene centrist, and I'm hoping you don't peg me for a hair-splitting anal retentive whatever.

My last comment before I return to my regular life is that I think the type of analysis presented in the second article that Kevin posted is really, really cool. I look forward to reading more about this type of approach in the future.
18 years ago (!) I briefly dated a guy who was working on his PhD in theoretical Bayesian stats. Man he was weird. But smart. I wonder what happened to him...

#14 kevin

  • Topic Starter
  • Member, Guardian
  • 2,779 posts
  • 822

Posted 11 August 2005 - 12:00 AM

Natural selection operates on the regulation of the relationships between entities, relationships that encourage the reproduction of the pattern formed by all of the entities together in a particular environment, not on the entities themselves. This may seem like semantics but it really is an important difference to consider when thinking about the 'unit' of natural selection, because a regulatory network cannot be considered a 'unit'.

#15 manofsan

  • Guest
  • 1,223 posts
  • 56

Posted 11 August 2005 - 12:13 AM

Heh, I was just reading what you wrote, and also looking at this latest announcement on the publishing of the rice genome:

http://www.physorg.com/news5742.html

It reminds me of how new Linux distributions ("distros") are announced. Haha, can you imagine that one day we'll be announcing new "human genome distros" for your own personal designer genome? Hehe, you'll be able to download the latest upgrade patch containing the latest signalling pathway upgrades.

#16

  • Lurker
  • 1

Posted 11 August 2005 - 02:00 AM

My take on Dawkins' "Selfish Gene" concept is that evolution has placed the highest priority on the perpetuation of information encoded in the genome - as it must - in comparison to the information encoded in the pattern of cytodevelopment. For organisms such as ourselves, with the level of specialised CNS development we achieve (and its consequence of consciousness), "selfishness" takes on a whole new meaning, since various aspects of the cytodevelopmental pattern in cerebral tissue are not merely encoded by the genome but are also related to the interaction of the CNS with the environment, and such information is always destroyed with the death of the organism. This is an astonishing waste until such time as we can preserve and perpetuate such information.

#17 manofsan

  • Guest
  • 1,223 posts
  • 56

Posted 11 August 2005 - 02:37 AM

prometheus, quantum holography is probably the only information storage mechanism with sufficient density to capture all the nuances of the human mind.

#18 kevin

  • Topic Starter
  • Member, Guardian
  • 2,779 posts
  • 822

Posted 11 August 2005 - 06:59 PM

Link: http://www.the-scien...m/2005/8/1/20/1


The Next Frontier in Cellular Networking

Systems biology doesn't have to be complicated


The statement that usually accompanies a talk or journal article on cellular biology is something like, "Life is complicated." The apparent complexity of the networks we find in living cells is sometimes quite overwhelming. Historically, many of us have shied away from tackling system-level questions, being content instead to productively study individual proteins or genes. The new mantra, systems biology, is proclaiming a change in attitude, to convince us that this overwhelming complexity is, in fact, tractable to human understanding. As a card-carrying, bona fide systems biologist, I would have to agree.

THE CELL COMPUTED

Currently, there is a community quite at home in dealing with huge complexity: modern-day microchip designers. Given the statistics on modern chip design, one wonders if, in fact, cellular complexity has been surpassed. For example, with the recent move to 90-nm fabrication technology, the average transistor is now less than 50 nm in diameter – only five times bigger than the average intracellular protein. Not only are the parts getting smaller, the number of parts fabricated onto a single die is quite astounding. For example, the AMD Athlon 64 has about 106 million transistors.

Given that a single kinase/phosphatase cycle has a dynamic response similar to a transistor, with approximately 518 kinases known to be expressed in humans, we are left with the embarrassing notion that a human cell's computational capacity is significantly less than even the very first microprocessor – the Intel 4004, which had just over 2,000 transistors. This comparison is perhaps unfair, since it assumes that cellular signaling pathways "compute" digitally like human-made microprocessors. Signaling pathways more likely operate like an analog computer. Most external signals are themselves analog, and protein kinetics are eminently suitable for analog computation.1 Assuming that a single kinase/phosphatase unit behaves as a modest analog element such as an operational amplifier, it puts human protein networks somewhere around an Intel 8086 microprocessor in terms of complexity. That's still not particularly high.
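The arithmetic behind those comparisons, using the article's own figures plus one explicit assumption (transistors per op-amp) that the analogy leaves implicit:

```python
kinases = 518                # human kinases, as stated above
intel_4004 = 2_300           # transistors in the first microprocessor
intel_8086 = 29_000          # transistors in the Intel 8086
athlon_64 = 106_000_000      # transistors in the AMD Athlon 64

# Digital reading: one kinase/phosphatase cycle ~ one transistor.
print(f"cell vs 4004 (digital view): {kinases / intel_4004:.2f}x")

# Analog reading: one cycle ~ one op-amp. Assume ~30 transistors per
# op-amp (a ballpark figure, not from the article).
analog_equiv = kinases * 30
print(f"analog-equivalent transistors: {analog_equiv:,} "
      f"(~{analog_equiv / intel_8086:.1f}x an 8086)")
```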

Even if we take into account the added complexity of gene networks, gene splicing, and the great variety of covalent states, we might still only be able to increase the complexity a little more than ten-fold – comparable to, say, a 486 processor. OK, perhaps these numbers are meaningless, but it makes one think for a moment that cells may not be as functionally complicated as they seem, given the relatively small number of components.

MOVING BEYOND A PARTS LIST

The real problem facing systems biology is that we don't know the computational roles that most cellular networks play. In many cases we know the parts, and we often even know the connections, but we have very little idea as to what the networks actually do. To take a simple example, glycolysis – presumably the best understood pathway, and historically the first discovered, more than 70 years ago – is still poorly understood in terms of its functional role. Other than some vague notions of maintaining homeostasis and supplying ATP, we still don't really understand why glycolysis has so many feedback and feed-forward loops.

What's surprising is that some researchers don't recognize the gap in our understanding. When I ask my colleagues why metabolism is not of great interest anymore, they say, "Because we understand metabolism. There is no more to discover." What they really mean is that all the parts and connections, sometimes in extreme detail, are known. But Humpty-Dumpty is still in pieces and no one has yet bothered to put him back together again.

As biologists, we have a long and illustrious tradition of collecting, be it butterflies or genes. Systems biology is trying to take us down a new route, one that requires a different mode of thinking. But many researchers do not fully understand the nature of the change. Some ignore it completely and consider it a passing fad. Others cope with the change by treating systems biology as an extension of traditional biology, equating it with the collection of vast amounts of high-throughput data. Some researchers focus on network inference as a defining attribute, while others equate systems biology to predictive biology that is closely tied with modeling. This last interpretation is closer to the spirit of systems biology, but modeling, though important, is still only a means to an end.

[Photo: Herbert M. Sauro]

While systems biology may make use of high-throughput data, network inference, and modeling at some stage, these activities are not the defining attributes. Instead, systems biology is about understanding and rationalizing the operation of a biological system. Systems biologists are not content with just a list of parts, terabytes of high-throughput data, or even a computational model. Systems biology is about looking at the MAPK pathway or the p53/Mdm2 couple and trying to rationalize the structure and kinetics of the network in terms of function. These studies may be on small systems or large; size is not important. However, given the confusion on the interpretation of the term systems biology, a more appropriate phrase for rationalizing cellular networks might be "molecular physiology."

THE PATH AHEAD

Biology is the product of evolution and not of an intelligent mind. This makes questions of function difficult for many biologists to address. Unlike engineers who have a design to work from, as biologists we often have very little idea of what evolution had "in mind." An engineer can open up an AM radio and locate the different modules – the amplifier, the frequency detector, the demodulator, etc. – and thereby rationalize what seems, to the untrained eye, a jumble of components. To be able to do the same trick on a complex signaling network involved in cancer would be a tremendous achievement both intellectually and for facilitating our search for a cure.

So how does one proceed? My own group at the Keck Graduate Institute focuses on the understanding of large networks by functional deconstruction using both bottom-up and top-down approaches.2 We have assembled a large library of network motifs through a combination of manual design and in silico evolution. Our networks can carry out all manner of processing tasks, ranging from simple mathematical operations, frequency filtering, integrators, and differentiators, to the more traditional oscillators and switches. Such a library allows us to inspect a new network and begin dividing it into functional domains. In this way we can begin to rationalize the structure and functional role of the different sections.
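As a flavor of what one entry in such a motif library might look like, here is a minimal sketch of a positive-feedback switch -- invented rate law and parameters, not the KGI group's code. Two different starting concentrations settle into two different stable states, the defining behavior of a bistable switch.

```python
def simulate_switch(s0, t_end=100.0, dt=0.01):
    """Forward-Euler integration of ds/dt = basal + s^2/(K^2 + s^2) - d*s."""
    s = s0
    for _ in range(int(t_end / dt)):
        ds = 0.01 + s * s / (1.0 + s * s) - 0.4 * s  # basal=0.01, K=1, d=0.4
        s += dt * ds
    return s

for s0 in (0.0, 1.0):
    print(f"s(0) = {s0:.1f}  ->  steady state s = {simulate_switch(s0):.3f}")
```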

The end result of a successful project is a functional understanding of the problem and a reliable predictive model. Most important is the coexistence of experimentation, theory, and modeling in one package. Modeling and experimentation, in particular, must go hand in hand. Do not fall into the trap of putting the modeling effort at the end of a project; this is the worst possible scenario. If anything, put the modeling effort up front, because when you do the modeling you will know what needs to be measured, what hypotheses need to be tested, etc. Modeling should guide you and uncover gaps in knowledge.

At the end of the project you will hopefully have a reasonable quantitative model of the biology (validating that model is another story, of course). You will likely find, however, that the model is so complicated that it's not easy to understand. This is where theory helps. Two essentially equivalent theories, metabolic control analysis (MCA) and biochemical systems theory (BST), are both excellent starting points.3

These theories describe how perturbations propagate through reaction networks. Most important, they force us to think about what happens to concentration levels and reaction rates during a perturbation, something that many of us never consider (which is quite odd considering that one of the defining attributes of life is change). In addition to these biological theories, an appreciation of engineering, particularly electrical engineering, is also indispensable. If there is any group in our community that understands signal processing more than most, it is, in my experience, the electrical engineers. Recent work by Brian Ingalls4 and, more recently, Chris Rao has shown that a deep connection exists between classical control theory and MCA/BST. This is good news because we can leverage that mountain of engineering theory to our cause.
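To give a flavor of what MCA computes, here is a minimal sketch for an invented two-step pathway: flux control coefficients obtained by finite differences, and the summation theorem (C1 + C2 = 1), which formalizes why control is shared and no step need be strictly "rate-limiting" -- the same point the Guardian piece at the top of this thread gestures at.

```python
def steady_state_flux(e1, e2, x0=10.0):
    """Toy pathway X0 -> S -> sink, with v1 = e1*(X0 - S) and v2 = e2*S."""
    s = e1 * x0 / (e1 + e2)  # solve v1 = v2 for the intermediate S
    return e2 * s            # steady-state flux J

def flux_control(e1, e2, step, h=1e-6):
    """Scaled sensitivity C_i = (dJ/J) / (de_i/e_i), by finite difference."""
    j0 = steady_state_flux(e1, e2)
    if step == 1:
        j1 = steady_state_flux(e1 * (1 + h), e2)
    else:
        j1 = steady_state_flux(e1, e2 * (1 + h))
    return (j1 - j0) / (j0 * h)

c1, c2 = flux_control(2.0, 1.0, 1), flux_control(2.0, 1.0, 2)
print(f"C1 = {c1:.3f}, C2 = {c2:.3f}, C1 + C2 = {c1 + c2:.3f}")  # sums to ~1
```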

If possible, theory and reverse engineering should be run alongside modeling and experimentation. Rationalization of the pathway will often generate hypotheses that one would never have considered. Finally, do not be seduced by the promise of high-throughput data. Unfortunately, much high-throughput data are inappropriate for quantitative systems biology, though it is hoped that this will change in the future. Hypothesis-driven, targeted experimental measurements are better, time-honored approaches.

Currently, the people involved in the new and exciting area of synthetic biology seem to be taking the intellectual lead.5 They take an engineering approach, building quantitative models and testing them experimentally under controlled conditions. If I could, I would bring synthetic and systems biology together, using the approach from the former to solve problems in the latter.

Herbert M. Sauro is a group leader in computational and systems biology at the Keck Graduate Institute. He thanks Anastasia Deckard and Vijay Chickarmane for reading the manuscript and providing helpful discussions.

He will be contacted at Herbert_Sauro@kgi.edu.

References
1. HM Sauro, BN Kholodenko, "Quantitative analysis of signaling networks," Prog Biophys Mol Biol 2004, 86: 5-43.
2. CA Voigt et al., "The Bacillus subtilis sin operon: an evolvable network motif," Genetics 2005, 169: 1187-202.
3. DA Fell, Understanding the Control of Metabolism, London: Portland Press, 1996.
4. BP Ingalls, "A frequency domain approach to sensitivity analysis of biochemical systems," J Phys Chem B 2004, 108: 1143-52.
5. S Hooshangi et al., "Ultrasensitivity and noise propagation in a synthetic transcriptional cascade," Proc Natl Acad Sci 2005, 102: 3581-6.


#19 treonsverdery

  • Guest
  • 1,312 posts
  • 161
  • Location:where I am at

Posted 12 August 2005 - 01:44 AM

Math systems thinking is wonderful. Making particular models of each known phenomenon reminds me of using math equation fitting (think curve fitting) to make each pylon of a bridge (visualize a suspension bridge); a bridge just goes from the present place to a preferred destination. With more pylons, not used as a bridge, the system is perceivable as a 2D mesh. Various kinds of utility come from the terrain of that mesh, with steepness, plateaus, and gulches. What humans like as a strongly correlative gene group -> chemistry thing is perceptible as places on a landscape with high utility.

To build up a mesh of math pylon models, people use their minds as well as create new ideas automatically with computers. These are pictures of automata. Each automaton is like a model, or pylon, of the mesh. As each automaton grows from a few rules and a seed value, it has moments, or times, of coherent output. If those times of coherent output are synchronous between different pylons, an effect, windowed with a duration, is generated. The nifty part is that if you create a descriptive language to automatically generate pylons from measured organism data, then as more and more data build up, the model becomes more predictive. Predictive, but different from deductive.

That is a wordy way to say: if you want your garden to be blue, and you have sufficient items like bluegrass, hydrangeas, and a blue car, then right before you leave for work, during spring, when car, grass, and bush are all being blue, you have a blue garden.

You guys know all that anyway. From a longevity engineering perspective, systems approaches create an opportunity to create new, non-observed pylons that favor duration+window->effect valued pathways.

Embedding the human genome with feedback systems, like automata that have a temporal form, creates opportunities to have simultaneous positive regeneration, like the recurring blue lawn.

Edited by treonsverdery, 19 October 2006 - 03:45 AM.


#20 treonsverdery

  • Guest
  • 1,312 posts
  • 161
  • Location:where I am at

Posted 12 August 2005 - 01:58 AM

Um, math models are dots. Value comes from connecting the dots. Making new dots is even more valuable. Proving that certain kinds of dots are present or not is powerful. Much discussion is people doing impressionistic pointillism. If there is ultimate power, it might be associated with "can dots be changed". The map-is-not-the-territory people then say, "a dot is not a mote". Then religionists suggest a "dote-ing" entity beyond "mots". Then my mind suggests that I do things with value, like the actual researchers that build pylons or go Mendeleeving or deGraying. Perhaps the bluntest useful thing I'd say is that cultivating the vast potential of longevity science with China's technically capable population is the right thing to do.

Edited by treonsverdery, 19 October 2006 - 03:46 AM.


#21 kevin

  • Topic Starter
  • Member, Guardian
  • 2,779 posts
  • 822

Posted 23 September 2005 - 08:35 PM

Link: http://www.genomeweb...e=2005923112928

I think I'll be buying this one..



Lee Hood Writing Systems Biology Textbook for Undergraduates
By Kirell Lakhman, editorial director, GenomeWeb News
NEW YORK, Sept. 23 (GenomeWeb News) - Lee Hood, president of the Institute for Systems Biology, is writing a textbook about systems biology for undergraduate students, he said at a conference earlier this month.

"Systems biology should start [at the undergraduate level] and there's nothing out there that even remotely covered the field," Hood told GenomeWeb News following his keynote address at the 12th European Congress on Biotechnology, held at the University of Copenhagen, Denmark, earlier this month.

The aim of the text, which is expected to be available by the end of the month, is to educate undergraduate students about "biology as an information science and the emergence of systems biology so they can think in ... conceptual terms and have the framework for ... learning about any kind of system."

Hood said he decided to write the book because the educational system in the United States isn't doing enough to "integrate" disparate scientific disciplines to help students understand the way they interact in an organism -- a common definition of systems biology.

"I think we do need to change the way we train" university students studying "fundamental sciences," Hood said. "Where education really fails [is] in fundamental sciences ... [because] there was an enormous concentration too early in their careers on details and not enough articulation on fundamental principles."

For example, he said he believes that students studying biology ought to pursue a dual major with, say, engineering, applied physics, or computer science. "I think the expectations in the US at the undergraduate level have been way too low," he said.

"We taught biology in the past too much as a scripted science ... and we're now in the position to teach it as a conceptual science," he added.

The book, "Biological Information and the Emergence of Systems Biology," by Roberts and Company Publishers, is co-authored with David Galas, Greg Dewey, John Wilson, and Ruth Veres.

#22 manofsan

  • Guest
  • 1,223 posts
  • 56

Posted 25 September 2005 - 04:09 AM

How do you feel about Wiki-textbooks?

http://en.wikibooks.org/wiki/Main_Page

If Systems Biology is the key to defeating aging, then why not make it Open Source?

Then the body of knowledge could be built up quickly and disseminated widely. As you know, the exorbitant rates charged for scientific papers have been causing concern amongst academics, and represent an impediment to full academic freedom. Textbook costs are likewise a significant challenge for any would-be student.

What are your feelings on creating an open-source wiki-textbook for Systems Biology? What do you all say? Can I presume we're all familiar with the Wiki concept? Perhaps Prof Hood could be persuaded to adopt a Creative Commons license approach? Comments?

#23 kevin

  • Topic Starter
  • Member, Guardian
  • 2,779 posts
  • 822

Posted 14 April 2006 - 05:27 AM

http://www.sciencema...ll/312/5771/189


COMPUTER SCIENCE:
Life in Silico: A Different Kind of Intelligent Design

Kim Krieger*

Engineers and computer scientists are trying to establish a standard tool kit for an emerging field of biology
A biologist sits in front of her computer screen, staring at a model of a bacterium, pondering which metabolic pathway it would use if it were buried deep in the ice of Jupiter's moon Europa. She goes online and searches a database. Within seconds, she finds what she's looking for: Models of three metabolic pathways have been designed and archived by other biologists from past projects. She downloads them, plugs them into her bacterium, programs in the icy environmental conditions, and starts the model running to see which works best.

Futuristic? Yes. But a growing cadre of systems biologists and computer scientists believes it's possible. Just as engineers use the program AutoCad to create structures in virtual space, if an enterprising team of researchers has its way, biologists will have their own software to design and assemble models of organisms and their components. One group has already ventured into this exotic territory--a computer group at Harvard University led by mathematician Jeremy Gunawardena. They soon plan to unveil what they consider the first truly modular program for bio design, called "Little b."

The name is a play on a powerful computer programming language called C. (Gunawardena's group had planned to call their software "B," for biology, but realized the name was already taken by C's predecessor.) Gunawardena outlined the program and some of its uses in talks last October at the Council for the Advancement of Science Writing meeting. One of his goals is to bring more consistency to the young field of computer simulation of biological systems. Although biological modeling has shown early promise in pharmaceutical research and the study of complex diseases, he believes it is still a "cottage industry" in sore need of standardization. Existing model systems are "not proper scientific objects," because they're not easy to reproduce or build upon.

Several other efforts to standardize biological models for easy sharing are under way. The European Bioinformatics Institute (EBI) in Hinxton, U.K., for example, has gathered more than 50 models in its online collection, called BioModels Database (www.ebi.ac.uk/biomodels). It makes them freely available for downloading. Models in this collection are validated for internal consistency and annotated for legibility. But Gunawardena and his collaborators have something more ambitious in mind: a future in which not just biological models but all the pieces of models are sharable. In this utopia, models should be able to swap computer code for protein cascades as easily as Mr. Potato Head swaps noses.

[Figure 1 (illustration: Tim Smith): Mix and match. Software designers are trying to create programs that would allow scientists to execute designs using standard biological components.]

In concept, Little b bears a resemblance to computer-aided design (CAD) software, which allows engineers to model a new machine in virtual space by combining different parts from a menu of possibilities. The engineer can take the best stock part and tweak it if necessary to make the best fit. Similarly, Little b would allow biologists to build a reaction chain, cell, or organism in silico, picking and choosing proteins and cell parts from a menu of modular parts. If a novel part is needed, the modeler could alter an existing module or write a new one. Once finished, the new module could just be plugged in.

Modularity is a fairly sophisticated software concept. The Little b group "is looking at this from the standpoint … of software engineering," which is a good thing, says Michael Hucka, a computer scientist at the California Institute of Technology in Pasadena. "They're ahead of the curve." Both Hucka and Herbert Sauro, a systems biologist and software engineer at the Keck Graduate Institute in Claremont, California, are principal engineers on another effort to standardize biological systems models called the SBML project, for Systems Biology Markup Language. In the past 5 years, SBML has become the standard--and pretty much the only--way for computational biologists to exchange models in a mutually readable format. SBML and Little b are entirely different beasts, however; Little b is a modeling language, whereas SBML is a file format. Like a Word document or PDF, it's simply a way of encoding a model so that it is readable by different machines.

SBML allows computational biologists to e-mail their models to other researchers, who can then run the models on the SBML-compatible software of their choice. The BioModels Database at EBI has collected and curated the best SBML-compatible models. Several journals, including Nature, now require all biological models described in their articles to be submitted to the BioModels Database as well. But neither the database nor SBML requires modularity, as Little b does.

This aspect of Little b promises a unique mix-and-match flexibility. Consider how the program deals with the cell lattice, a construct that describes how cells connect to one another in two or three dimensions. A conventional computer simulation that explores the effect of different cell lattices on embryo development, for example, treats the lattice structure as an integral part of the model. It would be difficult to change from a square to an irregular grid without altering the entire program. The connectivity in lattices can affect a number of conditions--the shape of the cell, which chemical signals it is exposed to, how much light it gets--altering development.

Gunawardena's group has created a model of embryonic development but made the lattice a flexible module. In Little b, it can be altered or replaced without reprogramming any other part, allowing the same experiment to be run on a wide array of cell-lattice patterns.

Little b can also be used to model complete organisms. There's a Drosophila melanogaster living in silico on a laptop at Harvard Medical School, designed completely by Gunawardena's group in Little b. Many research groups are modeling entire organisms and might well make use of a modular approach.

Although it doesn't aim to model whole organisms, the integrative cancer biology program of the National Cancer Institute (NCI) in Bethesda, Maryland, is considering a modular approach to modeling. "It's the Holy Grail to take all these individual modeling components, plug them together, and get a comprehensive view of what's going on," says Daniel Gallahan, the associate director of NCI's division of cancer biology.

Other researchers agree that modularity is necessary if computational biology is to advance. Sauro, for one, decries the "huge waste of grant money" in the "chronic reinvention" of computer tools that employ one-off models. The redundancy is amazing, he says. For example, the BioModels Database has tens of models just of the MAP kinase cascade, a signal pathway in mammalian cells. And no matter how many MAP kinase cascade models there are available, or how well curated they may be, without modularity they're next to useless to a researcher who wants one ready-made to integrate into a larger model. There's simply no way to plug it in.

Whether Little b succeeds will depend in part on how it evolves and how widely it is accepted. Not everyone is enchanted with it, because it's based on an abstract language called LISP, originally devised for artificial intelligence applications. Sauro suggests that the best way to get biologists to adopt a modeling language like this would be to give it a friendly graphical interface. No one has started a project like that, to his knowledge. Several computational biologists also say Little b's developers have not shared much information about their efforts thus far. Without input from the biology community, they ask, how can it satisfy everyone's needs?

Gunawardena says his group plans to begin teaching Little b to students at Harvard soon; he's preparing a paper that he hopes to publish this year. Then the test of community acceptance will begin in earnest.

#24 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 23 April 2006 - 07:31 PM

In this utopia, models should be able to swap computer code for protein cascades as easily as Mr. Potato Head swaps noses.


Eventually the above will not need to be an analogy. :)

As a software engineer, I agree that the LISP-based approach is unfortunate. I suppose as long as it's well modeled at the lowest layers, GUI-based model creation environments that allow non-coder biologists to quickly drag and drop reaction chains and subpathways around can be built on top of them.


#25 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,047 posts
  • 2,003
  • Location:Wausau, WI

Posted 17 August 2009 - 04:27 PM

At least there are some efforts to make systems biology more understandable to the human mind. Now I would like to see the graphical notation in Flash with pop-ups explaining what each piece of the puzzle means - to make it even more understandable to a layman like me.

Simplified Systems Graphical Notation Proposed

SBGN will make it easier for biologists to understand each other's models and share network diagrams more easily, which, Hucka says, has never been more important than in today's era of high-throughput technologies and large-scale network reconstruction efforts. [Michael Hucka is a senior research fellow and codirector of the Biological Network Modeling Center at Caltech's Beckman Institute --ed.] A standard graphical notation will help researchers share this mass of data more efficiently and accurately, which will benefit systems biologists working on a variety of biochemical processes, including gene regulation, metabolism, and cellular signaling.





