  LongeCity
              Advocacy & Research for Unlimited Lifespans





Can we model a cell?


12 replies to this topic

#1 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 16 February 2006 - 12:44 PM


Karen F. Greif, Ph.D.'s article poses the question and the challenges...

http://emergent.bryn...eChapterKFG.doc

#2 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 16 February 2006 - 10:06 PM

Argh ... I keep thinking this is a topic that's asking the question. Note to self: print doc.


#3 maestro949

  • Topic Starter
  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 16 February 2006 - 11:19 PM

Argh ... I keep thinking this is a topic that's asking the question.  Note to self: print doc.


It's a question I'm trying to get to the bottom of. I'm trying to get my head around the boundaries of what needs to be modelled and what doesn't, and whether we have done enough reduction to the point where we can black-box the emergent properties. Furthermore, even if we can build some robust models and then run simulations in silico, can we go back and determine whether they accurately model what happens in vitro? The quantity and complexity of pathways and proteins are certainly intimidating, but if it's not infinitely complex, then it's doable. And I may just be mad enough to try :). I'm currently reading up on complexity models and searching for studies, whitepapers, etc. from others that have attempted such nonsense, but most of what I find is fairly small in scale and narrowly targeted at a particular domain. Even those are fairly sophisticated engineering efforts in their own right.

#4 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 17 February 2006 - 03:18 AM

The quantity and complexity of pathways and proteins are certainly intimidating but if it's not infinitely complex, then it's doable

Permute the amino acids of a 10 kDa protein, and you end up with more possibilities than protons in the visible universe...

But still, evolution has done it, and so we can do it. If you think it's the most efficient thing you can do with your time, then why not go for it.
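The back-of-envelope arithmetic behind this claim can be checked quickly. The sketch below assumes an average residue mass of ~110 Da and the standard 20 amino acids, and takes ~10^80 as the commonly cited proton count for the visible universe:

```python
# Rough check of the combinatorics claim (assumptions: ~110 Da average
# residue mass, 20 standard amino acids, ~10^80 protons in the visible
# universe).
residues = 10_000 // 110          # ~90 residues in a 10 kDa protein
sequences = 20 ** residues        # possible amino-acid sequences

print(residues)                   # 90
print(sequences > 10 ** 80)       # True: ~1.2e117 vs ~1e80
```

So even a small protein's sequence space dwarfs the proton count by dozens of orders of magnitude, which supports the point that exhaustive enumeration is off the table.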

#5 maestro949

  • Topic Starter
  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 17 February 2006 - 01:52 PM

Permute the amino acids of a 10 kDa protein, and you end up with more possibilities than protons in the visible universe...


While we will want to search for more efficient proteins at some point, coming up with every combination of possible protein structures isn't necessary. We only have to model the thousands that are currently encoded in our DNA. Certainly a manageable number.

But still, evolution has done it, and so we can do it.


I doubt life has even sampled a fraction of them through trial and error but has found those that are "good enough" to keep cells replicating in perpetuity. If evolution had sampled them all we'd be a lot less fragile and probably already immortal.

If you think its the most efficient thing you can do with your time, then why not go for it.


At this point I still think it's a ridiculous notion that I personally will make much progress alone, and even with a significant team effort I suspect that after, say, five to ten years of modeling we'd end up with a large collection of data points which, once sims are built, will only be capable of predicting adjustments with a very small rate of success. But in my opinion, even that would be more than good enough to set us on the path to some significant advances.

Once we get a model to the point of covering all known structures and pathways, it would be a matter of keeping the model up to date as each new step forward in reduction is made. The model itself will be able to yield insight into where additional reduction is needed and even help us predict structures, proteins, functions and pathways that we are not yet aware of. As the model grows in significance alongside advancements in computing power, biological knowledge and tools for creating non-deterministic algorithms, we will be able to use it to develop simulations that vastly reduce the turnaround time in testing theories and potential improvements.

It's the holy grail of life extension as far as I'm concerned. I'm rather disappointed in the progress we've made over the past 25 years in medical science and its application. Despite Kurzweil and others raving about an exponential explosion in progress, I think we are actually starting to hit a wall in terms of significant breakthroughs, and that decoding the genome is mile one of a twenty-six mile marathon. The walls we are hitting are due to complexity curves and ineffective approaches to moving beyond them. A paradigm shift is needed towards transdisciplinary fields that cross the many traditional fields of science, a shift where this new combined approach feeds its findings and discoveries into many inter-connected models and meta-models.

Think of all the scientific information in Google, Wikipedia and the disparate scientific databases scattered throughout universities and labs using a single concise language and standards for mapping the interlocking pieces of information together, and you can start to envision a network of datapoints with enormous value. Algorithms could then mine the data and bring it to life, or at least a simulated life.

AI, the Singularity, nanotechnology, transhumanism and significant life extension are all possible, but we're on a very slow track to getting there. Models and simulations are the tools that will expedite our progress towards immortality. We will need to build new languages, we will need new tools for interfacing with them, and they will require a large amount of input and testing to bring to fruition, but aye... when they are built, they will eventually be as alive as we are.

Edited by maestro949, 17 February 2006 - 10:05 PM.


#6 olaf.larsson

  • Guest
  • 583 posts
  • 21
  • Location:Sweden

Posted 02 March 2006 - 02:48 PM

One misunderstanding is that this is not possible because of the incredible complexity of the system in real life. We don't need a complete understanding of the system to be able to model it.

Do meteorologists try to count the position of every molecule in a huge cloud to be able to predict that it will rain?

Of course not.

#7 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 02 March 2006 - 04:07 PM

That raises an excellent point. If a successful model appears, is there any value in integrating more information when it arrives? Maybe.

I'd imagine that with modelling neurons, we want as much information as possible, because we don't yet know how critical each piece of information (or potential for variation) is.

PS: I wish meteorologists would incorporate more data. It seems to me that, with satellites, it should be a lot easier to determine how much heat is radiating from each patch of land.

#8 maestro949

  • Topic Starter
  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 03 March 2006 - 03:47 AM

One misunderstanding is that this is not possible because of the incredible complexity of the system in real life. We don't need a complete understanding of the system to be able to model it.


It's not so much a misunderstanding but rather an optimism that, despite all the bright minds that scoff at the notion, there's only one true way to assure failure -- not trying. I've been poring over quite a bit of the literature on the edge of macromolecular simulation (Springer publishes lecture notes in computational science and engineering for this) and complexity theory, and delving into readings on microbiological structures and pathways. The challenge is stunningly complex.

Huge quantities of variables, terabytes of data, many forces interacting simultaneously, a complex hierarchy of emergent structures, complex interleaving chemical reactions, and abstract mathematical equations all coming together in an extremely sophisticated multi-layer topological 3-D model, eventually stepped a femtosecond at a time in simulation. It boggles the mind to envision it all working. The computing horsepower obviously isn't available to build full simulations, but that shouldn't stop us from building the models, knowing that breakthroughs in computing technology will eventually arrive. Building the models alone will take many years. Coming up with algorithms and state machines will take even longer. Trial and error on a small scale with model organisms like Mycoplasma genitalium, testing hypotheses and coming up with proofs, can and should start now. I would argue this is already in progress to some degree in the areas of pathway prediction, protein folding, molecular modeling in drug design, etc. A more grandiose vision and goal is needed, though, and the sooner we get started the sooner we can make it happen.
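At the small-scale end, a time-stepped model of this kind can start very simply. Here is a minimal sketch of stepping a toy pathway forward in time, assuming mass-action kinetics for an invented reaction chain A -> B -> C; the rate constants k1 and k2 are illustrative placeholders, not measured values:

```python
# Toy time-stepped kinetic model: A -> B -> C with mass-action rates.
# k1 and k2 are made-up illustrative constants, not real parameters.
def step(state, dt, k1=0.5, k2=0.2):
    a, b, c = state
    da = -k1 * a            # A is consumed
    db = k1 * a - k2 * b    # B is produced from A, consumed into C
    dc = k2 * b             # C accumulates
    return (a + da * dt, b + db * dt, c + dc * dt)

state = (1.0, 0.0, 0.0)     # initial concentrations
for _ in range(1000):
    state = step(state, dt=0.01)

# Mass is conserved: A + B + C stays at the initial total
print(round(sum(state), 6))
```

Real cell models replace this handful of equations with thousands of coupled ones and far better integrators, but the shape of the problem (state, rates, repeated small steps) is the same.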

Do meteorologists try to count the position of every molecule in a huge cloud to be able to predict that it will rain?

Of course not.


Correct, but anyone who has watched the Weather Channel for 2 hours can watch clouds moving in a direction and predict where they are going to be in the next two days. We're not trying to predict where it's going to rain, though. We're trying to make it rain cherry cola. But I understand your point. Surely we will eventually be able to abstract many molecular interactions as periodic boundary conditions, so that we don't need to calculate the quantum mechanics of every atomic particle, but the model needs to account for them with extraordinary precision, otherwise the simulations would lose accuracy after a few milliseconds of execution. By breaking up many of the forces, pathways and molecular dynamics into topological layers that function independently yet inter-dependently, interconnecting at fixed checkpoints, we can potentially achieve simulations without needing a quintillion gigabytes of memory and 200,000 supercomputers. Maybe just 1,000 or so ;) But hey, in 20 years linking 1,000 supercomputers might not be that unrealistic. Further good news is that we would only need 600,000 yottabytes to store all the atomic data of a single Homo sapiens. No problemo.
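The periodic-boundary idea mentioned above is one of the standard tricks in molecular simulation: instead of modelling an effectively infinite bath of solvent, coordinates wrap around a fixed box and distances are measured to the nearest periodic image. A one-dimensional sketch (box size arbitrary):

```python
# 1-D periodic boundary conditions: positions wrap around a box, and
# distances use the nearest periodic image.  Box length is arbitrary.
def wrap(x, box):
    """Map a coordinate back into [0, box)."""
    return x % box

def min_image(x1, x2, box):
    """Shortest separation between two coordinates under periodicity."""
    d = (x2 - x1) % box
    return min(d, box - d)

print(wrap(10.5, 10.0))           # 0.5
print(min_image(0.5, 9.5, 10.0))  # 1.0, not 9.0
```

Production MD codes do the same in three dimensions per coordinate, which is exactly what lets a small simulated box stand in for bulk surroundings.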

Bring it on! [thumb]

#9 maestro949

  • Topic Starter
  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 03 March 2006 - 03:58 AM

That raises an excellent point.  If a successful model appears, is there any value in integrating more information when it arrives?  Maybe.


Not maybe. Absolutely. Pick up any good microbiology textbook and every chapter will surely have sprinklings of "...but we don't know how that works yet." There's still a great deal we don't know. I am so damn baffled that so many freaking complexities can exist in a little cell. It's quite awesome. But I digress. A model built today will have many large gaps that will be filled in as new findings arrive. Take, for example, the recent finding that hydrogen bonds play a conserved role in protein folding. Bing. Add that puzzle piece to the model.

#10 John Schloendorn

  • Guest, Advisor, Guardian
  • 2,542 posts
  • 157
  • Location:Mountain View, CA

Posted 03 March 2006 - 04:50 AM

We're trying to make it rain cherry cola

Three cheers, mate! [thumb]

#11 JonesGuy

  • Guest
  • 1,183 posts
  • 8

Posted 03 March 2006 - 04:52 PM

The computing horsepower obviously isn't available to build full simulations but that shouldn't stop us from building the models knowing that breakthroughs in computing technology will eventually be here


I'm quoting this so that something wise appears in some of my posts.

With the brain, especially, we are going to need detailed models. Right now, we have no idea what will be 'sufficient' modelling to make the changes to the brain that we all want. But more=better.

I'm starting to think that our real weakness is organising the data and algorithms. We have a LOT of data that we just cannot seem to integrate into something useful. How do we improve the quality of people in this field?

#12 olaf.larsson

  • Guest
  • 583 posts
  • 21
  • Location:Sweden

Posted 04 March 2006 - 01:22 AM

Well, nobody starts by trying to model a eukaryotic cell. You start by modeling small systems with a few interacting proteins. Then you could go up to the level of the simplest bacteria. I suspect that there will be pretty good, useful models of simple bacteria in 5-10 years.

When trying to model a larger system, I guess data about the various small sub-models, produced by many people, will be stored in databases in a standardised manner.

About the brain model, I would like to say that you of course don't need complete information about every neuron to model a brain. What you need to know is, in principle, that neuron A has an output of 0 when it gets an input of 1, neuron B has an output of 1 when it gets an input of 0, etc., plus information about how the neurons are connected.
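The abstraction described above reduces each neuron to an input-to-output rule plus a wiring diagram. A toy version, with rules and wiring invented purely for illustration:

```python
# Toy neuron abstraction: each neuron is a rule mapping input to output,
# plus a wiring diagram.  Rules and wiring are invented for illustration.
rules = {
    "A": lambda x: 0 if x == 1 else 1,   # A inverts its input
    "B": lambda x: 1 if x == 0 else 0,   # B also inverts its input
}
wiring = {"B": "A"}                      # neuron B reads neuron A's output

def run(stimulus):
    out_a = rules["A"](stimulus)
    out_b = rules[ "B"](out_a)
    return out_a, out_b

print(run(1))   # (0, 1)
print(run(0))   # (1, 0)
```

Whether such input/output rules are really sufficient, or whether internal neuron state matters too, is exactly the open question debated earlier in the thread.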

Edited by wolfram, 04 March 2006 - 01:35 AM.



#13 maestro949

  • Topic Starter
  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 04 March 2006 - 11:49 AM

With the brain, especially, we are going to need detailed models.  Right now, we have no idea what will be 'sufficient' modelling to make the changes to the brain that we all want.  But more=better.


The problem with modeling the brain is that the whole is greater than the sum of the parts, and even when we finally achieve such a model, the resulting in silico simulation will not produce super-AI but rather just a better understanding of the physiology, which IMO will still not allow us to make any significant upgrades. Mind upgrades are sci-fi and pretty far off, IMO. I'd rather fix the aging problem first, and I think we are already smart enough to do that. Then we'll have plenty of time to work on tougher problems like achieving super-intelligence.

I'm starting to think that our real weakness is organising the data and algorithms.


At the cellular level the issues are related to quantitative data, such as kinetic parameters, dissociation constants, steady-state concentrations and flux rates. The larger issue I see is the growing data glut. It's going to grow exponentially, and organization will be key. Getting the bioinformatics/proteomics data in order requires codifying standards and then getting people to agree to use them. The problem with standards is that they usually imply boundaries, bureaucracy and a large investment of time to learn and finally comply with. Seeing, or I should say "acknowledging", the forest for the trees isn't always easy when research money and time are limited in academia. The other major challenge is that organizations seeking profit will hoard information and its aggregates as company secrets in order to maintain a profitable edge in their respective markets.

We have a LOT of data that we just cannot seem to integrate into something useful.


That "something useful", IMO, is a model that is not only an enumeration of every protein, structure and pathway but of how they all interconnect and behave in relation to each other (the physics and math). The challenge is how to build an open model that researchers can plug their findings into, rather than writing a "paper" and simply dumping it into a journal. It's still a little cloudy, but what's gelling in my mind is a collection of meta-models that are topologically independent but inter-related, i.e. some type of hierarchy of object models and Bayesian networks where each node is either a structure or a substructure. Another topology, the pathways that traverse these structures, could then be overlaid on top of that.
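One way those "topologically independent but inter-related" layers might look as data: a containment hierarchy of structures, with a separate pathway layer overlaid as ordered traversals between them. All names here are hypothetical placeholders:

```python
# Sketch of two overlaid topologies: a containment hierarchy of
# structures, plus a pathway layer traversing them.  All names are
# hypothetical placeholders, not a real ontology.
structures = {
    "cell": ["membrane", "cytoplasm", "nucleus"],
    "nucleus": ["chromatin", "nucleolus"],
}

# Pathway layer: an ordered traversal across the structure layer
pathways = {
    "signal_in": ["membrane", "cytoplasm", "nucleus", "chromatin"],
}

def contains(parent, child):
    """Is child reachable from parent in the containment hierarchy?"""
    kids = structures.get(parent, [])
    return child in kids or any(contains(k, child) for k in kids)

print(contains("cell", "chromatin"))   # True
```

Each pathway step could then be checked against the hierarchy, which is roughly the "fixed checkpoints between layers" idea from earlier in the thread.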

How do we improve the quality of people in this field?


Cream floats to the top. Sadly though they then get hired by private firms and their creativity is often wasted on seeking short-term profits.

Well, nobody starts by trying to model a eukaryotic cell.


People usually only model what they plan to follow up with a simulation. The problem with limiting the modeling to simple bacteria is that the model you end up with may need to be tossed out when you step up to a more complex cell type. Modeling the most complex cells and structures first, despite the lack of algorithms and horsepower to simulate them, and then working back to simpler cells mitigates this issue to some degree.

I suspect that there will be pretty good, useful models of simple bacteria in 5-10 years.


http://www.e-cell.org/ or E-Cell Wiki (user: guest, pwd: guest) looks like a promising step towards this...



