  LongeCity
              Advocacy & Research for Unlimited Lifespans





Accelerating Aging Research via Human Computation



#1 niner

  • Guest
  • 16,276 posts
  • 1,999
  • Location:Philadelphia

Posted 11 August 2007 - 02:35 AM


Being someone who works (or worked) with proteins, can you think of any value-generating utilities that software engineers could build and then put on the internet that require no chemistry, physics or biology expertise and rely only on visualization skills? Something along the lines of Galaxy Zoo? It could even be more complex, as long as it doesn't have to do lots of heavy-duty physics work.

Everything along these lines that I've heard of (Galaxy Zoo, Stardust, and the use of CAPTCHAs to decode scanned text) is an image analysis problem. In biology, these tend to come up on the experimental side rather than the theoretical or simulation side. It needs to be a problem where the image is complicated enough that an algorithmic approach is difficult, but simple enough that non-experts can deal with it. There needs to be enough data that it would be worth the trouble to farm it out to a lot of people. Maybe the thing to do would be to create a distributed image-analysis framework that made it easy for a scientist with the right kind of problem to submit the images and classification rules; sort of an "if you build it, they will come" approach. The kinds of problems that come to mind are things like cell sorting and cellular morphology, but I'm not an experimentalist, so who knows what somebody might come up with. It's been my observation that nothing moves science along like better tools. Chromatography and PCR are a couple of examples, but great software that's freely available over the net could work wonders.
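
To make the framework idea a bit more concrete, here is a minimal sketch in Python of what the core could look like: a researcher submits images plus a small set of allowed labels and plain-language instructions, volunteers each return a label, and a quorum-plus-majority rule decides when a classification is accepted. All of the names (ClassificationTask, record_vote, consensus) and the thresholds are made up for illustration, not taken from any existing project.

# Hypothetical sketch of a distributed image-classification service.
# A researcher submits images plus allowed labels; volunteers return one
# label per image; a simple majority across several volunteers becomes
# the accepted classification.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ClassificationTask:
    image_url: str                        # where the volunteer's browser fetches the image
    allowed_labels: list                  # e.g. ["normal", "abnormal", "can't tell"]
    instructions: str                     # short, non-expert wording of the classification rules
    votes: list = field(default_factory=list)

def record_vote(task: ClassificationTask, label: str) -> None:
    """Store one volunteer's answer, ignoring labels outside the allowed set."""
    if label in task.allowed_labels:
        task.votes.append(label)

def consensus(task: ClassificationTask, quorum: int = 5, agreement: float = 0.8):
    """Return the majority label once enough volunteers agree, else None."""
    if len(task.votes) < quorum:
        return None
    label, count = Counter(task.votes).most_common(1)[0]
    return label if count / len(task.votes) >= agreement else None

# Example: a made-up cellular-morphology task.
task = ClassificationTask(
    image_url="https://example.org/images/cell_0001.png",
    allowed_labels=["normal", "abnormal", "can't tell"],
    instructions="Pick 'abnormal' if the cell outline is ragged or lobed.",
)
for answer in ["abnormal", "abnormal", "normal", "abnormal", "abnormal"]:
    record_vote(task, answer)
print(consensus(task))                    # -> "abnormal"

The interesting engineering would be in the parts this sketch skips: serving images at scale, tracking each volunteer's reliability, and letting a scientist define the label set and instructions without writing any code.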

Edited by maestro949, 13 August 2007 - 11:39 AM.


#2 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 13 August 2007 - 11:39 AM

I split this off from the lengthy Engineering Approach vs. Fixing Metabolism thread, as it was starting to go off on a few tangents (me being the guilty party).

The concept of human computation seems to have some merit worth exploring further.

It needs to be a problem where the image is complicated enough that an algorithmic approach is difficult, but simple enough that non-experts can deal with it. There needs to be enough data that it would be worth the trouble to farm it out to a lot of people.


Exactly. Another type of project that falls into this class is one where an algorithm could do the job, but the cost of building the software would be prohibitive, whereas displaying images and/or data along with a small set of instructions to human volunteers would be relatively easy. I'd like to try to identify what types of problems fall into this class and see if we can't ferret out some good ideas.

niner - you mention cell sorting and cellular morphology. Did you have something in mind regarding these two as possibilities?


#3 caston

  • Guest
  • 2,141 posts
  • 23
  • Location:Perth Australia

Posted 13 August 2007 - 12:44 PM

Moving my post here as well guys:

Within a decade we'll likely have the 3D conformation of every protein that the genome can generate.  The computational horsepower to traverse the myriad of interaction combinations between these is probably many decades away.


At the risk of having people tell me to put the bong down, I was thinking about the universe while waiting for a bus today. If all the fundamental particles are actually just waves of energy, and neutrinos have three different states that they oscillate between, couldn't the fundamental particle actually be a qubit? Rather than explaining the human brain as a quantum computer, let's explain the universe as a quantum computer, and rather than trying to find life in the universe, let's consider that the entire universe is living.

Now what if we invest a lot of time and energy into digital biological simulations only to discover that binary just isn't going to cut it?

I say this because it may actually be better to let our stem cells (telomerase-induced, DNA-repair up-regulated, with the 13 mitochondrial proteins allotropically expressed from the nucleus) divide in vitro, then reintroduce the mitochondria for transplant back into the body. We need to compare digitally storing our biological information with keeping backup cells alive in optimum conditions.

Alternatively, the cells could be kept dormant by starving them of oxygen. My understanding is that a cell starved of oxygen does not necessarily die, as it can be reanimated if the correct procedure is followed. The cell only really dies through necrosis or when the mitochondria trigger apoptosis.

I believe that at the moment we still view DNA as classical storage with error correction, but we are constantly finding out that it holds more information than we realise. You could make a genetic backup and then, 10 years later, discover you are missing half the information.

Edited by caston, 13 August 2007 - 02:21 PM.


#4 caston

  • Guest
  • 2,141 posts
  • 23
  • Location:Perth Australia

Posted 14 August 2007 - 03:24 PM

Here is the HowStuffWorks article on quantum computing.

Now tell me if we shouldn't consider using them for aging research.

http://computer.hows...um-computer.htm

Here are some more links about the idea that the universe is a quantum computer:

http://www.wired.com.../play.html?pg=4

http://en.wikipedia....ng_the_Universe

Edited by caston, 14 August 2007 - 03:37 PM.


#5 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 14 August 2007 - 07:27 PM

We'll use quantum computers when they are here. They aren't, so it's not worth investing much energy in them. The emphasis needs to be on designing strategies and tools for knocking down the barriers immediately in front of us. We will certainly have more CPU speed on our desktops and in our networks, so utilizing that should be the priority. When quantum computers do arrive, they will not be a panacea but rather one more tool to help us crunch the data.

As far as backing up DNA data goes, it's not a bad idea, but the state information you're looking for isn't all there. Once cells specialize, there is information all over the place. Google the term "epigenomic information" and you'll see what I mean.

To preserve your cells' current state you would need to put many of the various cell types on ice or, for longer hauls, cryobank them.

#6 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 15 August 2007 - 12:59 AM

Coincidentally (given the start of this thread), here is a news story from a few days ago entitled "Aging as a Computing Problem":
http://www.technewsw...tory/58759.html

#7 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 15 August 2007 - 01:16 AM

Good find.

Computer scientists can make a huge impact on this area of inquiry and should work toward partnering with scientists like Lithgow.


Indeed. The faster we transform biology into an information science, the better.

The more people we can get engaged at every level, the better, too. We should find ways to help non-experts participate in the process. I've added this to my list of missions.

#8 caston

  • Guest
  • 2,141 posts
  • 23
  • Location:Perth Australia

Posted 15 August 2007 - 10:53 AM

We'll use quantum computers when they are here. They aren't, so it's not worth investing much energy in them. The emphasis needs to be on designing strategies and tools for knocking down the barriers immediately in front of us. We will certainly have more CPU speed on our desktops and in our networks, so utilizing that should be the priority. When quantum computers do arrive, they will not be a panacea but rather one more tool to help us crunch the data.


Well, I actually began to put together an argument that classical computers are almost useless. It takes several million dollars' worth of gear and enormous amounts of energy just to simulate the potato mosaic virus for a few seconds. Maybe if quantum computers come along and our code is portable to them, e.g. they have C compilers, x86 emulators and JVMs, then what we are doing is worthwhile.

Also I found the following on the d-wave website:

http://www.dwavesys....=bioinformatics

Imagine for a second that Google had access to quantum computing. With an advanced enough state of the art, they could replace their entire server farm with a single quantum machine. What happens after that? Suddenly Yahoo and MSN want it as well; so does Amazon, and the world's banks and the telecommunications companies realise they are going to need quantum computers in order to compete.

What we need to do is make a very basic but programmable quantum computer. It could be as little as 12 qubits, made available over the web for people to time-share and experiment with writing software that could later scale up to a larger quantum machine. The funding model could be user-pays, sponsorship or advertisements.
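
Just to give a flavour of what the programming model for such a shared machine might look like, here is a toy sketch in Python that simulates a few qubits classically with numpy. It is emphatically not real quantum hardware, and the function names are made up for illustration; a real service would expose a much richer gate set.

# Classical toy simulation of a few qubits, to illustrate the kind of program
# a small web-accessible quantum machine could accept. This is NOT real
# quantum hardware; it just tracks the full 2**n state vector with numpy.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I = np.eye(2)                                   # identity (leave a qubit alone)

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to one qubit of an n-qubit state vector."""
    op = 1
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else I)
    return op @ state

n = 3                                   # 3 qubits -> 8 amplitudes
state = np.zeros(2 ** n)
state[0] = 1.0                          # start in |000>

for q in range(n):                      # put every qubit into superposition
    state = apply_gate(state, H, q, n)

probs = np.abs(state) ** 2              # measurement probabilities
print(probs)                            # eight equal values of 0.125

The state vector doubles with every added qubit, which is exactly why a dozen qubits are trivial to simulate on a PC while a few hundred are hopeless, and why real hardware would be worth time-sharing over the web.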

We could also make an advertisement (or webvertisement) set in the future that presents quantum computers as if they were a standard server range, the sort of thing IBM or Dell would offer, but applied to quantum computing.


As far as backing up DNA data goes, it's not a bad idea, but the state information you're looking for isn't all there. Once cells specialize, there is information all over the place. Google the term "epigenomic information" and you'll see what I mean.


Thanks, I'll check that out.


To preserve your cells' current state you would need to put many of the various cell types on ice or, for longer hauls, cryobank them.


Why freeze them when we can just starve them of oxygen until they are needed?

Edited by caston, 15 August 2007 - 11:47 AM.


#9 modelcadet

  • Guest
  • 443 posts
  • 7

Posted 15 August 2007 - 12:01 PM

What we need to do is make a very basic but programmable quantum computer. It could be as little as 12 qubits, made available over the web for people to time-share and experiment with writing software that could later scale up to a larger quantum machine. The funding model could be user-pays, sponsorship or advertisements.


See D-Wave's 16-qubit machine, Orion. From Rose's blog:

One very cool thing that we’re planning to do in Q2/2007 is to provide free access to one of these systems to people who want to either develop or port applications to it…so if you have an idea for an app that needs a fast NP-complete problem solver, start thinking about what you could do with some serious horsepower.


Many people criticize D-Wave, but unlike Steorn, I believe they're actually a legitimate company. They have quite a patent portfolio, anyway.

#10 caston

  • Guest
  • 2,141 posts
  • 23
  • Location:Perth Australia

Posted 16 August 2007 - 12:41 PM

http://www.physorg.c...s106395871.html

#11 Ghostrider

  • Guest
  • 1,996 posts
  • 56
  • Location:USA

Posted 23 August 2007 - 01:21 AM

That article did not have much content. It ended here:

The idea that aging is a disease will someday be as common as an online game. In the meantime, important advances in fighting age-related disease are in the works and computer scientists are playing an important role. That role should be allowed to expand, without political interference.


I wish it would have elaborated more on the second sentence above.


#12 maestro949

  • Guest
  • 2,350 posts
  • 4
  • Location:Rhode Island, USA

Posted 12 December 2007 - 01:11 PM

The Economist has an article about the concept of human computation. They refer to it as "citizen science."

Spreading the load
Dec 6th 2007
From The Economist print edition

Computing: A new wave of science projects on the web is harnessing volunteers' computers in novel ways—and their brains, too

WAY back in 1999, a badge of geek pride was to run a new screensaver program called SETI@home. This used spare processing capacity on ordinary PCs to sift through radio-telescope data for signs of extraterrestrial intelligence. The bad news is that so far, not a peep has been heard from any little green men. The good news is that SETI@home is still going strong, with over 3m contributors, and is being joined by a rapidly growing legion of other volunteer computing projects supporting worthy scientific causes.

The choice is bewildering. Your PC can help design drugs against AIDS, model the future climate of the planet, search for new prime numbers or simulate micro-devices for handling satellite propellant, to cite just a few examples. Part of the boom in volunteer computing is due to an open-source platform for running such projects, called BOINC (Berkeley Open Infrastructure for Network Computing), launched in 2002 by David Anderson, the director of SETI@home. Today over 40 BOINC projects are in operation, with 15 in the life sciences alone. IBM, which runs a philanthropic initiative called World Community Grid and has signed up over 800,000 volunteer computers, is switching all the humanitarian projects that it supports to run on BOINC. These include Help Conquer Cancer, Discovering Dengue Drugs and AfricanClimate@home, which the computer giant runs on behalf of university research groups that need lots of computer power for their research.

But numbers are not all that matters. BOINC also makes it easier for anyone with a research idea to gain access to distributed-computing power. Two years ago, at the age of 18, Rytis Slatkevicius launched a project called PrimeGrid, which has since assembled possibly the largest database of prime numbers in the world, and has broken several records: last August, for example, it found the biggest known example of a special kind of prime number called a Woodall prime. In his native Lithuania, Mr Slatkevicius is a soft-spoken business student by day, but in the evenings he manages servers for his project, eking out enough to cover his costs from Google Ads, sales of mugs and T-shirts, and donations from supporters.
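
As a small illustration of the arithmetic being farmed out: a Woodall number has the form n·2^n - 1, and a few lines of Python suffice to check small candidates (the record hunts use specialised primality tests at vastly larger n, not this brute-force sketch).

# Tiny illustration of the arithmetic behind a Woodall-prime search.
# A Woodall number is W(n) = n * 2**n - 1; record attempts involve n in the
# millions and specialised tests, not this brute-force check.
from sympy import isprime

def woodall(n: int) -> int:
    return n * 2 ** n - 1

woodall_primes = [n for n in range(1, 400) if isprime(woodall(n))]
print(woodall_primes)   # the known small exponents: 2, 3, 6, 30, 75, 81, 115, 123, 249, 362, 384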

Another development that is boosting volunteer computing is the use of devices other than PCs, in particular games consoles and the powerful processors they contain (see article). This has been demonstrated most spectacularly by a project called Folding@home, run by Vijay Pande and his team at Stanford University, which simulates protein folding and mis-folding—a cause of diseases such as Alzheimer's. In September the combined computing capacity of the project passed one petaflop—a quadrillion mathematical operations per second—something supercomputer designers have dreamed of for several years. With just over 40,000 PlayStation 3 volunteers, Folding@home entered the record books as the most powerful distributed-computing network on Earth.

Along with a rapid increase in the number and diversity of research projects to which they contribute, there has been a marked improvement in the software that binds the volunteers together into groups. They can share information and opinions about the science behind the projects they are supporting, and perhaps make new friends in the process. Matt Blumberg, a BOINC expert based in New York, has made a click-and-play portal called GridRepublic for a host of projects, to encourage more non-techies to get involved. BOINC even has a volunteer help desk where experienced users can advise newcomers via Skype, a free internet-telephony service.

As well as collaboration, there is also a strong element of competition among computing volunteers. Like online gamers, they can compete individually or in teams to rack up the most processing time for a given project. Some enthusiasts fill their garages with PCs just to get a shot at being user of the week. And a new generation of projects takes the concept of volunteer computing to a higher level of user interaction by allowing volunteers to get involved in analysing data—in effect, donating spare brain capacity, too.


“Volunteer computing is a huge untapped resource, not just a clever publicity stunt.”

Take, for example, the Galaxy Zoo project, where volunteers have been helping astronomers to classify the shapes of galaxies from images taken by the Sloan Digital Sky Survey, an international collaboration which is mapping a large section of the visible universe in unprecedented digital detail. Thanks to the exquisite pattern-recognition capabilities of the human brain, amateurs with just a little training can distinguish between different types of galaxy far more efficiently than computers can. The project started in July to little fanfare, but news of it spread rapidly on the web, and more than 100,000 volunteers classified over 1m galaxies in a few months—a task which would have taken a lone astronomer years of unbearably tedious effort. Galaxies are traditionally divided into spiral and elliptical categories, but how one evolves into the other remains controversial. Better statistics might help to shed light on the nature of galactic evolution.

The researchers behind Galaxy Zoo, a collaboration between research groups at Oxford University and Portsmouth University in Britain, and Johns Hopkins University in America, are already writing up the first papers based on the galaxies classified so far. They have also submitted requests for viewing time on big telescopes in order to follow up on some of the more unusual discoveries made by volunteers. Plans are in the works for a second phase requiring more detailed analysis and drawing on other image banks too.

Citizen science meets Moore's law

Of course, there is nothing new about networks of amateurs helping scientists do their jobs. Ornithologists rely on bird-watchers to keep track of changing patterns of migration, astronomers have long profited from enthusiasts scanning the skies to spot new comets, and archaeologists benefit from amateurs' finds. But the potential for such citizen science is expanding rapidly because of Moore's law—the doubling of processor power every 18 months or so—and a similarly speedy growth of the bandwidth available to ordinary internet users. People with no special tools other than a PC and a broadband internet connection can take part in complex scientific projects from the comfort of their own homes.

The easiest part is getting the public involved. Most volunteer-computing projects can draw on tens of thousands of people with practically no advertising, relying on word of mouth. The problem is usually keeping these eager amateurs busy. The Galaxy Zoo project was initially overwhelmed by the public response, and had to upgrade its servers and computer network to cope with the demand for images, which reached peaks of 70,000 per hour. Chris Lintott of Oxford University, lead researcher on the project, says he was thrilled by the public's reaction. “We've had complaints that the site is addictive, as you never quite know what the next image is going to reveal,” he says.

Then there is the question of ensuring that what the volunteers do is scientifically valid. Most of the projects, whether powered by processors or by brains, rely on independent validation of a result by several volunteers. In the case of Galaxy Zoo, for example, each image was viewed by over 30 volunteers, who proved just as accurate as checking by a professional astronomer. Indeed, scientists often find the tables are turned, with some of the more technically minded volunteers spotting bugs in their computer programs and even helping to fix them.
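
A back-of-the-envelope calculation shows why this redundancy works: if each volunteer independently classifies a galaxy correctly with some probability p, the chance that the majority of k volunteers is wrong falls off very quickly as k grows. The p = 0.8 below is an illustrative assumption, not a measured Galaxy Zoo figure.

# Why many amateur classifications can match an expert: the probability that
# the majority of k independent volunteers is wrong shrinks rapidly with k.
# The per-volunteer accuracy p = 0.8 is an illustrative assumption.
from math import comb

def majority_wrong(p: float, k: int) -> float:
    """Probability that a majority of k volunteers (k odd) get it wrong."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(k // 2 + 1))

for k in (1, 5, 15, 31):
    print(k, majority_wrong(0.8, k))
# roughly: 1 -> 0.2, 5 -> 0.058, 15 -> 0.0042, 31 -> 0.000088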

Searching for aliens with SETI@home; modelling the climate with climateprediction.net; sorting galaxies with Galaxy Zoo

Perhaps the biggest hurdle, though, is getting fellow scientists to accept that volunteer computing is a huge—and still largely untapped—resource, not just a clever publicity stunt. When Andrew Westphal of the University of California at Berkeley first talked to colleagues about using volunteer computing to spot the tell-tale tracks left by microscopic interstellar dust grains in tiles of porous aerogel, he met with considerable scepticism. Yet this was the problem facing him when a capsule returned to earth in 2006 from a probe called Stardust.

Starting in August 2006, the Stardust@home project enlisted some 24,000 volunteers to search images of the aerogel via a web-based “virtual microscope”. In less than a year they performed more than 40m searches and found 50 candidate dust particles, which scientists now plan to extract. When Dr Westphal presented the results at a conference in March, the impressive level of agreement for even the faintest tracks, each of which was spotted by several hundred independent volunteers, won over the sceptics.

Projects searching for cosmic dust or classifying galaxies clearly appeal to young cybernauts, but what of other, more mundane-sounding tasks? Fortunately the number of internet users is so large that some people, somewhere, are likely to find a particular volunteer project interesting. Getting enough volunteers to document plant specimens from the dusty 19th-century archives of British collections, for example, might seem like a hopeless task, yet that is exactly what Herbaria@home is doing.

The project was launched last year by Tom Humphrey of the Manchester Museum, and already some 12,000 herbarium specimens have been documented by volunteers. This typically involves downloading an image of a specimen, deciphering the various comments that experts have written next to it in longhand, and entering this information in an organised fashion on a website. The project started with specimens from the Shrewsbury School herbarium, but has ambitious plans to expand to collections at universities and museums at home and abroad. Although this may not seem high-tech, the project relies on very high-resolution digital images, and ordinary internet users' ability to download and display them—something that would have been unfeasible just a few years ago.
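
That "organised fashion" amounts to turning a handwritten label into a structured record, something along the lines of the sketch below (the field names are illustrative guesses, not the project's actual schema, and the example values are invented).

# Hypothetical sketch of the structured record a volunteer might fill in after
# reading a specimen sheet. Field names and example values are invented, not
# Herbaria@home's actual schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class SpecimenRecord:
    sheet_image: str          # URL of the high-resolution scan
    taxon: str                # species name as written on the label
    collector: str
    collection_date: str      # kept as free text: old labels are often vague ("June 1867")
    locality: str
    notes: str = ""

record = SpecimenRecord(
    sheet_image="https://example.org/herbarium/sheet_0042.jpg",
    taxon="Primula vulgaris",
    collector="J. Smith",
    collection_date="May 1841",
    locality="near Shrewsbury",
    notes="label partly illegible",
)
print(json.dumps(asdict(record), indent=2))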

Bossa nova

To lower the barrier to entry for projects like this, Dr Anderson recently launched a new open-source platform called BOSSA (Berkeley Open System for Skill Aggregation), which aims to do for “distributed thinking” what BOINC has done for distributed computing. One of Dr Anderson's first customers for BOSSA is Peter Amoako-Yirenkyi of the Kwame Nkrumah University of Science and Technology in Kumasi, Ghana, who is working with other African researchers and a research group called UNOSAT, which processes digital-satellite data for various United Nations agencies.

The project, which is part of an initiative called Africa@home co-ordinated by the University of Geneva, will enlist volunteers to extract useful cartographic information—the positions of roads, villages, fields and so on—from satellite images of regions in Africa where maps either do not exist or are hopelessly out of date. This will help regional planning authorities, aid workers and scientists documenting the effects of climate change. Dr Amoako-Yirenkyi is excited by the prospects such projects open up for African researchers. “We can leapfrog expensive data centres, and plug directly into a global computer,” he says. Rather than fretting about a digital divide, researchers in developing countries stand to benefit from this digital multiplication effect.


Economist: Spreading the Load



