  LongeCity
              Advocacy & Research for Unlimited Lifespans


Most cancer and health research is junk science

Tags: ronald bailey, flawed research, cancer research, junk science

24 replies to this topic

#1 Mind

  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 04 April 2012 - 09:46 PM


If you have followed some of the more controversial threads here at Longecity, this should not come as a surprise: most research results are junk science. Many astute posters here are quick to point out flaws in various studies, and it turns out they are probably right to question the validity of the results. MOST cancer and health study results cannot be reproduced. Nine out of ten cancer study results cannot be reproduced!!

This reminds me of the yogurt thread, where the mice lived only a fraction of the days that well-cared-for lab mice live, yet it was touted as a breakthrough lifespan study by its authors.

And it dovetails nicely with this thread about mouse lifespan studies.

Some choice quotes from the article:

The two note that they are not alone in finding academic biomedical research to be sketchy. Three researchers at Bayer Healthcare published an article [PDF] in the September 2011 Nature Reviews: Drug Discovery in which they assert “validation projects that were started in our company based on exciting published data have often resulted in disillusionment when key data could not be reproduced.” How bad was the Bayer researchers’ disillusionment with academic lab results? They report that of 67 projects analyzed “only in 20 to 25 percent were the relevant published data completely in line with our in-house findings.”
Perhaps results from high-end journals have a better record? Not so, say the Bayer scientists. “Surprisingly, even publications in prestigious journals or from several independent groups did not ensure reproducibility. Indeed, our analysis revealed that the reproducibility of published data did not significantly correlate with journal impact factors, the number of publications on the respective target or the number of independent groups that authored the publications.”


Given all the brouhaha [PDF] over how financial interests are allegedly distorting pharmaceutical company research, it’s more than a bit ironic that it is pharmaceutical company scientists who are calling academic researchers to account. Back in 2004, an American Medical Association report [PDF] on conflicts of interest noted that reviews comparing academic and industry research found, "Most authors have concluded that industry-funded studies published in peer-reviewed journals are of equivalent or higher quality than non-industry funded clinical trials.” In an email, Begley, who was an academic researcher for 25 years before joining Amgen, agrees, “My impression, I don't have hard data, is that studies from large companies is of higher quality. Those companies are going to lay down millions of dollars if a study is positive. And they don't want to terminate a program prematurely so a negative study is more likely to be real.”
These results strongly suggest that the current biomedical research and publication system is wasting scads of money and talent. What can be done to improve the situation? Perhaps, as some Nature online commenters have bitterly suggested, researchers should submit their work directly to Bayer and Amgen for peer review? In fact, some venture companies are hedging against “academic risk” when it comes to investing in biomedical startups by hiring contract research organizations to vet academic science.


The FDA/big pharma nexus is a legitimate concern; however, big pharma has A LOT at stake when pursuing new drugs, and thus (like Amgen) absolutely needs to reproduce academic studies to make sure they are legit. This is a good thing.

It goes to show how difficult it is to conduct a robust, relevant study.
  • like x 3
  • dislike x 1
  • Ill informed x 1

#2 jadamgo

  • Guest
  • 701 posts
  • 157
  • Location:USA

Posted 14 April 2012 - 05:09 PM

This is a major problem with published research across a vast variety of fields. The peer-review process fails to catch it in all but the best journals.

One moral of this story is that you do NOT understand the results of a study if you just read the abstract. You MUST review the research methods and interpret the statistical data yourself. If you don't know how to do that, I'm sorry, but you don't know how to understand a research study. You just can't believe the interpretation given by the researchers in the abstract or discussion & conclusions sections -- too many of them are flawed. Unfortunately, some scientists don't know how to logically interpret the data from their own studies.

Forum members who DO know at least the basics of research procedure and statistical analysis -- never forget the potential you have to help people understand the science better. Even if they don't want to hear what you're saying, which is often "The science didn't say anything useful this time."
  • like x 2
  • Agree x 2


#3 Danail Bulgaria

  • Guest
  • 2,213 posts
  • 421
  • Location:Bulgaria

Posted 15 April 2012 - 06:19 AM

jadamgo, you are right, but the time that one person can spend is not enough to review the research methods and interpret the statistical data of every study on a given topic. So, unfortunately, there is no way to cope with this problem.

#4 niner

  • Guest
  • 16,276 posts
  • 2,000
  • Location:Philadelphia

Posted 16 April 2012 - 11:15 AM

Mind, that is a great article you found at reason.com; thanks for linking it and for tying in those other threads. While we're on the topic, I'd like to mention two things. The first has to do with the relevance of preclinical research to humans. From the article:

Last week, the scientific journal Nature published a disturbing commentary claiming that in the area of preclinical research—which involves experiments done on rodents or cells in petri dishes with the goal of identifying possible targets for new treatments in people—independent researchers doing the same experiment cannot get the same result as reported in the scientific literature.


The first thing I want to say about evaluating research is that in vitro work is mostly irrelevant even if it IS correct. "In vitro" literally means "in glass", from back in the days when all labware was glass. Even if today it's really in plastico, it has the same problem: there's nothing in the experiment relating to absorption, metabolism, distribution, or elimination, and precious little relating to toxicity. These are the things that kill 99.9% of all potential drugs. In vitro results should be considered as pointers to future animal experiments, and nothing more.

The next level up in relevance is non-mammalian species. Results in flies and nematodes are interesting, but they are so far removed from humans that they are "almost in vitro". In bug-o, if you will. We all know that rodents aren't little people, but on the spectrum of research results they are far closer to humans than to bugs or cells, so research done in rodents is far more relevant to humans. All the ADME/Tox issues that are missing in cells, and mostly missing in bugs, are present in rodents. This brings me to my second point:

Don't throw the baby out with the bathwater. Don't fall victim to the idea that "all research is crap", because it isn't. Most preclinical work is crap. As soon as you start putting compounds into humans, you are in the clinic. The relevance to humans is (nearly) 100%, assuming the work is correct, and the likelihood of correctness is higher because clinical studies are not likely to be run by desperate assistant professors who are trying to get tenure. Things to consider in clinical work:

How were the patients selected? Clever patient selection can be used to make just about any point you want to make.

How were placebo effects handled? You'd like to see placebo controls where neither the patient nor the researchers know who got the real stuff. This is known as a "double blind" control. Obviously, the researchers break the blind at the end of the experiment so they can evaluate the results, but it prevents them from inadvertently treating patients and controls differently.

How many subjects are there? As the number of subjects increases, statistical power improves.

Are multiple genetic populations, genders, and ages represented? If not, the results are less relevant to the non-represented populations, although they are still far better than any animal experiment.

Are there any conflicts of interest? Who stands to make money here? If the researcher works for a large drug company, the company will get the money, but the researcher will still get their salary. That isn't a huge conflict. If the researcher is an academic or other lone wolf with a company on the side, watch out. It's the small business owners who really stand to get rich, and they are often looking to secure more funding with a nice result.
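To put a rough number on the sample-size point, here is a minimal Python sketch (purely illustrative; the 0.3 standard-deviation effect size and the trial counts are assumptions, not taken from any particular study) that simulates a simple two-arm trial and counts how often a real but modest drug effect is detected at p < 0.05 as the number of subjects per arm grows:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
effect_size = 0.3      # assumed drug-vs-placebo difference, in standard-deviation units
n_simulations = 2000   # simulated trials per sample size

for n_per_arm in (10, 25, 50, 100, 200):
    # Draw outcomes for the drug and placebo arms of many simulated trials at once.
    drug = rng.normal(effect_size, 1.0, (n_simulations, n_per_arm))
    placebo = rng.normal(0.0, 1.0, (n_simulations, n_per_arm))
    p = stats.ttest_ind(drug, placebo, axis=1).pvalue
    detected = (p < 0.05).mean()
    print(f"n = {n_per_arm:3d} per arm -> real effect detected in {detected:.0%} of trials")

With 10 subjects per arm the effect is usually missed; with a couple of hundred it is detected most of the time, which is one more reason small trials deserve extra skepticism.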
  • like x 2

#5 Brainbox

  • Member
  • 2,860 posts
  • 743
  • Location:Netherlands

Posted 16 April 2012 - 11:53 AM

As an addition to niner's excellent post:

Calling pre-clinical research "crap" in general might even be somewhat drastic.

Each type of research suits a purpose. When climbing the mountain towards a new curative or improved medication, you need to start in a cost-effective manner. "In vitro" is just a cheap first step of this climb, with each step followed by a GO/NOGO decision.

The culprit is the use of these preliminary results as if they were not preliminary, e.g. for home use to fulfill wishful expectations.

The "throwing mud" exercise cited in the post by Mind reveals another aspect of intentional skewing of research results and interpretations.

I do recognize the need for commercial development of medical therapies. However, this calls for a distinction between fundamental research and research on the realization and implementation of promising therapies that have sufficient relevance in the real world of everyday medical practice.

Now it seems that the playground of fundamental research gets claimed by commercial companies before objective results are commonly available, so that companies can gain lead positions in certain areas before others have even bothered to look. This leads to all sorts of frustration and irritation, since the perception is that research parties (both academic and commercial) are falsely claiming land by planting flags all over the place. Although academic research is nowadays more and more commercially funded, this probably does not always lead to enhanced synergy. On the contrary: commercialization of fundamental research causes increased polarization, and a need to misinform bystanders or put up smoke screens to deceive the competition.

This kind of business model seems to be aimed at the short-term benefit of a few organizations instead of the long-term general benefit. These two types of interest probably do not conflict in all cases, but recognizing that they both matter could be an incentive for further optimization.

[naive mode]
Could the open source paradigm provide part of the solution for fundamental research?
[/naive mode]

Edited by Brainbox, 16 April 2012 - 01:29 PM.

  • like x 1

#6 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 16 April 2012 - 09:57 PM

Thanks for jumping into the conversation, you guys. Good points all around.

This caught my interest because many preliminary and in vitro results make it to the pages of this forum, and they are often given too much weight, creating false hope regarding supplements.

Also, I should point out that there is a difference between the usefulness of in vitro and animal models and what this particular article was discussing, which is that these studies were not reproducible. The authors are pointing out that an alarming percentage of these studies suffer from flawed design, incompetent personnel, or fraud/bias. They called it "junk science", which is kind of harsh, but it is a problem.

Edited by Mind, 16 April 2012 - 09:58 PM.


#7 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 18 April 2012 - 09:25 PM

In my most recent podcast with Justin Rebo (Imminst, SENS, ImmunePath), he says one of his greatest challenges is sorting through all the garbage in published papers, which are full of bias and poor design.
  • Ill informed x 1

#8 joelcairo

  • Guest
  • 586 posts
  • 156
  • Location:Calgary, Alberta, Canada

Posted 25 April 2012 - 07:37 PM

I read the original article in Nature but not the link you posted. One huge source of selection error in the claims being made is that the researcher was looking for novel, unexpected, promising results for his lab to initiate research on. What that says to me is that outliers with no supporting research are quite likely to be the result of error, misconduct, or simply a random result that happens to cross the 1% or 5% significance threshold. But to sweepingly claim that 90% of all cancer research is junk science is just ridiculous.

Also, this person's claims are completely undocumented and he didn't even reveal which studies could not be reproduced. I have a few choice words about the ethics of that, but they're not really germane to the main discussion.
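joelcairo's selection-effect point can be made concrete with a small simulation. The sketch below is purely hypothetical (an invented screen of 1,000 truly inert compounds with 20 samples per arm, nothing to do with the actual Amgen or Bayer data): a lab keeps only the "novel, unexpected, promising" hits at p < 0.05, and an independent group then repeats exactly those experiments:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_compounds, n_samples = 1000, 20   # assumed screen size and samples per arm

def screen():
    """Run one experiment per compound; every compound here is truly inert."""
    treated = rng.normal(0.0, 1.0, (n_compounds, n_samples))
    control = rng.normal(0.0, 1.0, (n_compounds, n_samples))
    return stats.ttest_ind(treated, control, axis=1).pvalue

hits = screen() < 0.05                # the "exciting" results selected for follow-up
replicated = screen()[hits] < 0.05    # an independent repeat of only those hits

print(f"initial hits: {hits.sum()} of {n_compounds}")
print(f"hits that survive replication: {replicated.sum()}")

Roughly 5% of the inert compounds come up as hits by chance, and only about 5% of those hits survive the repeat, so a replication failure rate near 90% can arise from selection alone, without any fraud.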
  • like x 1

#9 Florin

  • Guest
  • 850 posts
  • 30
  • Location:Cannot be left blank

Posted 09 February 2013 - 07:11 AM

A new paper is out claiming that only 14 percent of biomedical research is wrong.

#10 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 09 February 2013 - 01:30 PM

A whole lot of epidemiological studies that showed a decreased mortality rate among overweight people might have been flawed/junk.

#11 nowayout

  • Guest
  • 2,946 posts
  • 439
  • Location:Earth

Posted 09 February 2013 - 03:05 PM

A whole lot of epidemiological studies that showed a decreased mortality rate among overweight people might have been flawed/junk.


A whole lot of epidemiological studies are flawed/junk period. :)

And as for the rest, they get interpreted wrongly once they are set loose on the world. Epidemiological studies do not show causation.

Edited by viveutvivas, 09 February 2013 - 03:08 PM.

  • like x 3

#12 Deep Thought

  • Guest
  • 224 posts
  • 30
  • Location:Reykjavík, Ísland

Posted 26 January 2014 - 02:08 PM

Some more information on junk science.

Bad science and tamiflu:
A 2012 Cochrane review maintains that significant parts of the clinical trials still remains unavailable for public scrutiny, and that the available evidence is not sufficient to conclude that oseltamivir decreases hospitalizations from influenza-like illnesses.[6] As of October 2012, 60% of Roche's clinical data concerning oseltamivir remains unpublished.[14]
It may still be a useful drug for reducing the duration of symptoms, although for this use it still has yet to be compared with NSAIDs or paracetamol.[15]
Roche commissioned an independent reanalysis of its data in 2011. One of the authors had received income from an organization sponsored by Roche previously but they were not funded by Roche for this analysis.[16] They concluded that early oseltamivir use reduced the number of lower respiratory tract infection treated with antibiotics from 9.3% to 5.9% in hitherto healthy adults and children.[16] No benefit occurred in those without

This TED talk is worth a watch.
http://www.youtube.com/watch?v=h4MhbkWJzKk


Here are some articles that deal with the bias that certain physicians have as a result of being on the payroll of pharmaceutical companies:
It's in Danish, but you will find the same trend among American physicians and European physicians as a whole... greed is a universal human emotion.

http://www.180grader...en-politiken-dk
http://www.mx.dk/nyh.../story/20142505

http://www.bmj.com/c...t/347/bmj.f4342


Of course, even these articles may be biased.

Edited by Deep Thought, 26 January 2014 - 02:11 PM.


#13 Luminosity

  • Guest
  • 2,000 posts
  • 646
  • Location:Gaia

Posted 27 January 2014 - 04:31 AM

Some research is just fraudulent, just faked. Other studies are deliberately designed to get a bad result, like when a drug company pays for a study on a supplement and uses the wrong form in the wrong way at 100 times the dosage. They do that all the time. Then the New York Times publishes an article saying that supplement is bad, because they sold their good name for a few advertising dollars. A lot of people on this site don't have what it takes to read a study with understanding. Unfortunately, those are the same ones that sling them around like laser swords in a nerd fight. We should do something about that.
  • dislike x 3
  • like x 2

#14 joelcairo

  • Guest
  • 586 posts
  • 156
  • Location:Calgary, Alberta, Canada

Posted 27 January 2014 - 10:34 PM

The problem isn't the science. Most studies are just fine, when interpreted in the context of the exact parameters being measured and the experimental conditions used. The problem is that most thinking is junk thinking.
  • like x 2
  • Agree x 1

#15 jadamgo

  • Guest
  • 701 posts
  • 157
  • Location:USA

Posted 13 February 2014 - 07:28 PM

Some research is just fraudulent, just faked. Other studies are deliberately designed to get a bad result, like when a drug company pays for a study on a supplement and uses the wrong form in the wrong way at 100 times the dosage. They do that all the time. Then the New York Times publishes an article saying that supplement is bad, because they sold their good name for a few advertising dollars. A lot of people on this site don't have what it takes to read a study with understanding. Unfortunately, those are the same ones that sling them around like laser swords in a nerd fight. We should do something about that.


I'm surprised you got -2 for saying this. I don't agree with everything about this post but I completely agree with you pointing out the influence of big pharma on research.

For those open to learning how a company can influence a study, even one done by an outside team, here's how: give free grant money so a team can study a compound which you just so happen to have developed.

If they publish that it works, give them lots more grant money in the future.
If they find that it doesn't work, consider giving them more grant money in the future, but ONLY if they turn the results over to you so you can keep it from being published. (Wouldn't your competitors love to pay these guys to publish that study? Not "pay money" of course, but "pay" them by promising future grants. How do you stop them? Take all the copies of the data and lock them away forever.)
If they publish the negative result, never give them grant money again. Perhaps tell other Big Pharma grantwriters, "Hey, don't give these people money to study your compound because they'll publish even if it's negative."

So even though the grant money to do a study is "free and unconditional," next year's grant money is NOT free and unconditional. Worse yet, the threat of getting blacklisted can seriously damage a scientist's career. Sure, you could still work on government-funded grant money, but if you were planning on working indirectly for Big Pharma then it would take years of hard work to get yourself back on track.

That's how big pharma can influence the publication of studies and hide negative results. You want proof that this happens? Easy! It's called "publication bias". By looking at the patterns of numbers in studies that get published, it can be proven that studies with positive results are getting vastly over-represented and those with negative or neutral results are under-represented.

By using the data from the positive studies, you can predict how many negative or neutral ones should be out there, but if you look, they aren't there. If you take the positive studies and add the predicted neg/neutral ones, now you have a prediction of how many total studies were done. Go look at the grants issued to do studies, and that number will be about right! This means a bunch of studies were in fact done, but mostly only positive ones got published.
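As a purely hypothetical illustration of that back-of-the-envelope estimate (simulated trials and an invented suppression rate, not real grant or publication data), the Python sketch below runs 200 identical drug trials, "publishes" all of the positive ones but only a fifth of the rest, and then uses the published positives together with the statistical power of a single trial to recover roughly how many trials must have been run:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_per_arm, true_effect = 200, 40, 0.3   # all numbers invented for illustration

# Simulate 200 identical two-arm trials of a drug with a real but modest effect.
treated = rng.normal(true_effect, 1.0, (n_trials, n_per_arm))
control = rng.normal(0.0, 1.0, (n_trials, n_per_arm))
significant = stats.ttest_ind(treated, control, axis=1).pvalue < 0.05

# Publication bias: every positive trial gets published, only ~20% of the rest do.
published = significant | (rng.random(n_trials) < 0.20)
n_positive_published = int((significant & published).sum())

# Power of a single trial (normal approximation); in real life this would be
# estimated from the effect sizes reported in the published positive studies.
power = 1 - stats.norm.cdf(1.96 - true_effect / np.sqrt(2.0 / n_per_arm))
implied_trials = n_positive_published / power

print(f"published: {int(published.sum())}  (positive among them: {n_positive_published})")
print(f"trials implied by the positives: {implied_trials:.0f}  (actually run: {n_trials})")

The gap between the number of studies actually published and the number the positives imply is the missing negative literature described above.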

This is a complex subject, but the information isn't suppressed. You can google publication bias and find out all about it, or look it up in library books. So there's no reason to believe this just because I said it or just because it seems to make sense -- educate yourself and come to your own conclusion!

P.S. Publication bias includes more than just influence from companies. If you google it you'll find there are other reasons too, such as the fact that journals want to publish interesting results. "This happened" is usually more interesting than "this didn't happen." But for drug studies, the bias is much larger than for basic biology research, and outside influence is the only reasonable explanation. Also, many scientists have talked openly about this, especially ones who tried to publish negative results about drugs and got blacklisted, so you can read up on those interviews too.

#16 BlueCloud

  • Guest
  • 540 posts
  • 96
  • Location:Europa

Posted 17 February 2014 - 02:55 PM

http://www.newyorker...3fa_fact_lehrer
http://www.theatlant...science/308269/

P.S. Publication bias includes more than just influence from companies. If you google it you'll find there are other reasons too, such as the fact that journals want to publish interesting results. "This happened" is usually more interesting than "this didn't happen." But for drug studies, the bias is much larger than for basic biology research, and outside influence is the only reasonable explanation. Also, many scientists have talked openly about this, especially ones who tried to publish negative results about drugs and got blacklisted, so you can read up on those interviews too.


Indeed. There's a recent rant about this by Nobel laureate Randy Schekman:
http://www.theguardi...-damage-science
http://www.theguardi...cience-journals

#17 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 06 April 2018 - 07:24 PM

Flimsy animal studies lead to poor/failed clinical trials in humans: https://www.sciencem...msy-animal-data

 

I somewhat agree with some comments here that saying most health research is "junk" is a little harsh, but maybe it needs to be said. Maybe that is why medical progress is so slow.

 

I have more trust in our LongeCity Affiliate labs than in most university or pharma labs. My feeling is that they are more thorough because they are more driven to find true rejuvenation therapies.



#18 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 06 April 2018 - 07:30 PM

Here is a great explanation of how a lot of research goes bad: 


  • Informative x 1
  • like x 1

#19 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 07 April 2018 - 11:12 AM

You also need to be wary of the cells you order when doing an experiment. It is amazing (and scary) to find out how many results (using "bad" cells) have been published: http://blogs.science...campaign=buffer



#20 Jesuisfort

  • Guest
  • 37 posts
  • 3
  • Location:Berlin

Posted 05 March 2019 - 07:32 PM

So metformin doesn't work for cancer prevention and life extension?

Aspirin too?

Beta glucan?

Vitamin D?

All of these supplements/medications are worthless?



#21 Dorian Grey

  • Guest
  • 2,159 posts
  • 973
  • Location:kalifornia

Posted 07 March 2019 - 06:00 AM

I believe the quantity as well as quality of studies provides value. 

 

For instance, if you type Curcumin into PubMed search, you'll get 12862 results. Is it possible curcumin is really just an inert substance, with no particular value at all in human physiology?  

 

While it is certainly wise to take isolated research papers with a grain of salt, I would hope we don't throw any babies out with the bathwater. Published research is, after all, the foundation of medical progress. Discount it all as rubbish, and we'll wind up with little more than idle speculation.


Edited by Dorian Grey, 07 March 2019 - 06:05 AM.

  • Agree x 1

#22 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 07 March 2019 - 06:59 PM

I believe the quantity as well as quality of studies provides value. 

 

For instance, if you type Curcumin into PubMed search, you'll get 12862 results. Is it possible curcumin is really just an inert substance, with no particular value at all in human physiology?  

 

While it is certainly wise to take isolated research papers with a grain of salt, I would hope we don't throw any babies out with the bathwater. Published research is, after all, the foundation of medical progress. Discount it all as rubbish, and we'll wind up with little more than idle speculation.

 

You have brought up a good point. Research that has been done over and over for decades and shows statistical significance can be trusted more than the one-off studies that get all the media hype. This is true not only of curcumin, but also of vitamin D3 and a few other nutraceuticals. The same is true for the benefits of exercise and calorie restriction - enough studies have been done in enough species that one can have high confidence.

 

It just goes to show how difficult it is to do biomedical research and produce statistically significant results. Too many people put too much faith in one-off results, which are notoriously hard to reproduce (as has been highlighted in this thread).
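As a toy illustration of why repetition helps (the effect sizes and standard errors below are invented, not taken from the curcumin or vitamin D literature), pooling several modest studies with a standard fixed-effect inverse-variance average shrinks the uncertainty well below that of any single study:

import numpy as np

# Hypothetical effect sizes and standard errors from five comparable studies.
effects = np.array([0.20, 0.35, 0.10, 0.25, 0.30])
std_errors = np.array([0.15, 0.20, 0.12, 0.18, 0.16])

weights = 1.0 / std_errors**2                      # inverse-variance weights
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"typical single study : {effects[0]:.2f} +/- {std_errors[0]:.2f}")
print(f"pooled across studies: {pooled_effect:.2f} +/- {pooled_se:.2f}")

The pooled standard error falls roughly as the square root of the number of comparable studies, which is why decades of consistent results are worth far more than one hyped one-off finding.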


Edited by Mind, 07 March 2019 - 11:04 PM.

  • like x 1
  • Agree x 1

#23 Mind

  • Topic Starter
  • Life Member, Director, Moderator, Treasurer
  • 19,058 posts
  • 2,000
  • Location:Wausau, WI

Posted 28 January 2022 - 08:00 PM

The use of AI in medicine and drug discovery is being hampered by junk science/studies. Like they say in programming: garbage in, garbage out.



#24 sensei

  • Guest
  • 929 posts
  • 115

Posted 02 February 2022 - 08:11 PM

One of the major roadblocks to real studies is that pesky thing called medical ethics.

The second thing is time.

For many mechanisms that influence human disease on the macroscale, the timeframe for salubrious or deleterious results is measured in decades.

Useful results MUST be leveraged with corroborating evidence -- else generations will pass before any "acceptable" study data is available.


Edited by sensei, 02 February 2022 - 08:14 PM.



#25 sensei

  • Guest
  • 929 posts
  • 115

Posted 02 February 2022 - 08:17 PM

Correlation is not causation. Corroboration is not causation.

But DO you want to wait 40 or 50 years to see the results of anti-cancer or pro-longevity therapies or interventions?

Unless and until we repeatedly see humans live to be older than 123 years, we don't have prima facie scientific evidence that an intervention for longevity was effective. Do you want to wait that long??




