The scientific method is a set of tools for thinking about and investigating the natural world. Scientists make hypotheses about how the world works and then conduct experiments to test them. To be testable, hypotheses must be falsifiable. That is, it must be possible to design tests that can either support them or refute them.
Q. What does that have to do with health?
A. The scientific community seeks to test the validity of ideas about the nature and treatment of disease. Judgments are based on the scientific method. Over the last 150 years, most of the progress in medicine—and all the other sciences—has resulted from its use.
Q. Who makes the judgments?
A. Scientists who conduct experiments they consider significant usually report their results to a peer-reviewed journal. The journal editor sends copies to other scientists who are experts in the same field. They check whether the work is accurate and up to date and whether it adheres to the principles of scientific investigation. The paper is then accepted, rejected, or returned to the author with suggestions for revision. Peer review thus serves as a tool for weeding out sloppy work and unwarranted conclusions. Publication in a peer-reviewed journal indicates that the paper has met that journal’s standards. Of course, not all journals enjoy equal status in the scientific community. Publication by a journal like Nature, Science, the New England Journal of Medicine, or JAMA (Journal of the American Medical Association) is quite a feather in a scientist’s cap!
Q. What about testimonials? Can’t personal experience demonstrate what works?
A. “Testimonials” are personal accounts of someone’s experiences with a therapy. They are generally subjective: “I felt better,” “I had more energy,” “I wasn’t as nauseated,” “The pain went away,” and so on. Testimonials are inherently selective. People are much more likely to talk about their “amazing cure” than about something that didn’t work for them. The proponents of “alternative” methods can, of course, pick which testimonials they use. For example, let’s suppose that of 100 sick people, 50 will recover on their own even if they do nothing. So, if all 100 use a certain therapy, half will get better even though the treatment did nothing for them. These people could say, “I took therapy X and my disease went away!” This would be completely honest, even though the therapy had done nothing for them. So testimonials are useless for judging treatment effectiveness. For all we know, those giving the testimonials might be the only people who felt better. Or, suppose that of 100 patients trying a therapy, 10 experienced no change, 85 felt worse, and 5 felt better. The five who improved could quite honestly say that they felt better, even though nearly everyone who tried the remedy stayed the same or got worse!
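The arithmetic here can be sketched in a short simulation. This is a hypothetical illustration (the function name and the numbers are mine, not from the article): a therapy that does nothing still yields dozens of completely honest "it cured me" testimonials whenever some patients recover on their own.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def simulate_useless_therapy(n_patients=100, spontaneous_recovery=0.5):
    """Simulate n_patients taking a therapy that has no effect at all.
    Each patient independently recovers on their own with the given
    probability; the count of recoveries is returned."""
    return sum(random.random() < spontaneous_recovery for _ in range(n_patients))

recovered = simulate_useless_therapy()
# Every one of these patients could honestly say
# "I took therapy X and my disease went away!"
print(f"{recovered} of 100 patients recovered after a therapy that did nothing")
```

Only the recovered patients write testimonials; the rest stay silent, which is exactly the selection effect the paragraph describes.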
Q. I still don’t see how scientists can be any more accurate. Aren’t they just offering their observations as “testimonials”? How do we know they aren’t mistaken?
A. Scientists solve this problem with randomized controlled trials (RCTs). RCTs examine groups of patients and use statistics to determine what works. To draw reliable conclusions, scientists follow several rules:
Inclusion criteria must be strict. That is, researchers make certain that the people studied actually have the condition being treated. If you’re trying a new remedy for cancer, but you don’t in fact have cancer, your experience won’t be very helpful to those who do.
All (or nearly all) the people in the trial must be accounted for. We can see why this is important if we return to our example of the disease in which 5% of the people get better. If you just hear about the 5 people who got better, you might be convinced that the therapy is a great idea. But, what if the other 95 people given the therapy got worse than they would have without it? Suddenly, the 5% doesn’t look quite so rosy!
The people being treated are compared to a control group: patients who do not receive the therapy. For example, if 5% of the treatment group got better and 5% of an untreated control group got better, we could conclude that the therapy was ineffective. If 5% of the treatment group got better but 10% of the control group got better, we might conclude that the therapy was actually causing harm. Notice that even when a study demonstrates harm (twice as many people get better without the remedy as with it), there could still be some people who could testify that they were cured!
Finally, randomized controlled trials aim for objectivity. Scientists try to measure the progress of the disease without relying only on how the patient “feels,” since feelings can change even if the disease is staying the same or getting worse. To increase objectivity, patients are assigned randomly to the control or treatment groups; this avoids the bias of putting patients whom the scientist thinks will do well into the treatment group. Ideally, neither scientists nor patients should know who receives what until the experiment is completed—a setup called “double-blind” testing.
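These rules can be illustrated with a toy trial in code (a hypothetical sketch with made-up recovery rates, not data from any real study): each patient is assigned to treatment or control at random, and the recovery rates of the two arms are then compared.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def run_toy_trial(patients, recovery_rate_treated, recovery_rate_control):
    """Randomly assign each patient to the treatment or control arm,
    then return the fraction who recovered in each arm."""
    groups = {"treatment": [], "control": []}
    for _patient in patients:
        # Random assignment removes the bias of hand-picking
        # promising patients for the treatment group.
        arm = random.choice(["treatment", "control"])
        rate = recovery_rate_treated if arm == "treatment" else recovery_rate_control
        groups[arm].append(random.random() < rate)
    return {arm: sum(outcomes) / len(outcomes) for arm, outcomes in groups.items()}

# A therapy with no effect beyond placebo: both arms share the same
# underlying recovery rate, so their observed rates come out similar.
rates = run_toy_trial(range(1000), recovery_rate_treated=0.33, recovery_rate_control=0.33)
print(rates)
```

Because the two observed rates are nearly equal, the trial would (correctly) conclude that the therapy adds nothing beyond the control condition, even though a third of the treated patients could honestly testify that they got better.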
Q. Why all the concern about the control group and the random assignment? Wouldn’t it be simpler to just give the treatment to the patients and see what happens to them? After all, we know that the people without the treatment won’t get any better!
A. Good point. This issue is at the heart of the RCT. In the 1950s, scientists found that roughly one out of three patients would feel improved even when given a pharmacologically inert substance such as a sugar pill. This is called “the placebo effect.” The way we perceive our body’s experiences can be altered by our state of mind and our beliefs. The proportion of people who respond to placebos can be even higher, especially if the patient or the doctor giving the treatment fervently believes it will work. This is why we use a “control” or “placebo” group—the group being tested gets, say, the pill we want to study, and the control group gets a sugar pill. Both groups might show some improvement, but if they improve by the same amount, we conclude that the improvement comes from the placebo effect. We randomize patients to one group or the other so that neither hope nor bias determines who gets which therapy, and we blind the study so that the doctor does not know who is receiving what. As mentioned, the beliefs of the doctor giving the therapy can increase the placebo effect, so blinding ensures that everyone is treated equally.
Q. Are you saying that testimonials aren’t good for anything?
A. No. Testimonials can be a good place to start looking for answers, but they should not be considered the end of the journey. Many scientific discoveries begin with an observation that leads to a hypothesis that can eventually be tested with a randomized controlled trial. However, those who offer nothing but testimonials probably have nothing better to offer. After all, it is possible to get a testimonial for nearly anything. In the early 20th century, quack doctors sold medicines that were radioactive or gave patients bits of radioactive metal to wear next to their skin. Many patients gave enthusiastic testimonials. They may have sincerely felt they were better, but experience showed that the treatment wasn’t doing them any favors—it ultimately made them much worse.
Q. It sounds like you are suggesting that scientists are much wiser and smarter than other ‘normal’ people.
A. Just the opposite. The scientific method is not a way of saying that scientists have all the answers. Scientists use it because they realize how easy it is to be deceived or to fool ourselves without even knowing it, especially when we dearly want something to be true. That’s why science always tests remedies in a way that could show them to be ineffective. We should all be open to the fact that we could be wrong, and design our tests accordingly.
“Freedom of Choice”
Q. OK, I can see why scientists work the way they do. But this process takes time—don’t you think that some sick people that aren’t being helped by scientific medicine get impatient and want to try something else?
A. You hit the nail on the head. This is the central issue for the patient, which is why I have considerable sympathy for those who seek out dubious therapies. However, I have less for those who peddle them without being totally honest and forthright. The key question for the patient is, “What will help ME?” Physicians, policymakers, and society, however, face a somewhat different question. Society must deliver the best possible health care to the largest number of people, in a timely fashion, with only limited resources. So it should attempt (through the scientific method) to determine which therapies are effective. Of course patients are free to do anything they like, but should society, insurance companies, etc., have to pay for anything that patients decide they want? If I decide that bathing in water filled with gold dust is the cure for my ailment, should you [as a taxpayer or insurance purchaser] have to foot the bill if the process doesn’t work? We also expect our doctors to give us good and reliable health care, both ethically and legally. Should physicians be held professionally or criminally responsible if they do not try, say, coffee enemas for cancer just because someone claims they help? Intelligent choice depends on the ability to separate what works from what is merely wishful thinking. The scientific method offers the best way to do this.
Q. Even with the scientific method, isn’t it possible that scientists could be motivated by unethical desires in getting their therapies proved? Don’t try to tell me that drug companies aren’t equally interested in getting their drugs marketed and accepted!
A. Exactly. That’s why claims must be backed by evidence. The scientific approach is designed to weed out ideas that we wish were true, but aren’t. Of course drug companies want their drugs used and sold—but they have to prove that they work and are safe enough. Why shouldn’t every therapy be held to the same standard? Anyone profiting from a remedy or cure has a vested interest in selling it. Scientific scrutiny is the only way to know whether we are getting our money’s worth. A return to the days of unregulated health care (as in the 19th century) probably isn’t in anyone’s interest except those who want to make money without proving that they provide good value and honest advertising.
Q. Why shouldn’t we have a system in which people can go anywhere they want to get the health treatment they want?
A. Most people would say that we have such a system. The tougher question is who should pay for unproven treatments. Where should the limits of insurance or government coverage lie? Science offers the only objective standard on which such judgments can be based.
Q. If I am looking for new shoes or a new car, I have many choices. Why should government regulate or interfere with the claims made for various health-care approaches?
A. The free market does permit people to seek out anyone or anything they want for health care (assuming it is legal!). But there are several differences between buying a pair of shoes or a car and choosing health care.
It is fairly clear what cars or shoes do when we examine them. We can try on the shoes, test-drive the car, compare their appearance or specifications, and so on. The trial-and-error period of deciding can be postponed indefinitely, and we are even expected (in the absence of fraud) to be fairly sure of what we are purchasing. Indeed, the legal maxim “buyer beware” (caveat emptor) presupposes such an approach.
The need for health care is not readily controllable or postponable. People don’t plan to be hurt or sick, but it happens. We need the help now, not next week after we’ve shopped around a bit. And time is often of the essence—trying on five pairs of wrong shoes before we find one that fits is no loss; trying five useless therapies before hitting on the right one is not so appealing a proposition! Furthermore, we cannot control the type of disease we have or the treatment we will require. If I buy a car, I can settle for a Ford Pinto if I can’t afford a Mercedes. People in liver failure can’t decide to “settle” for a few aspirin if that’s all they can afford.
Treating disease always has an element of uncertainty. Scientific health care is based on a statistical approach that determines which therapies offer the greatest odds of helping. Because diseases can wax and wane, and because the body has a marvelous ability to heal itself, it is very difficult to determine through one person’s experience whether a therapy should be recommended to everyone.
It is difficult for non-experts (which is almost all of us, in most areas) to make intelligent decisions about health care. The body is mysterious to many people, and biology is probably among the most complex sciences. We depend on people who, we hope, have years of training and experience to advise us in areas in which we do not have the time, means, education, or (in some cases) even the awareness to learn enough to make a truly informed and rational decision.
We count on our physicians to select what we need to understand in order to make a decision, and so we trust them to get it right. This is an enormous trust, which partly explains the respect given to physicians and the abuse heaped on them when they fail us. Thus, there must be means in place to protect the public from those who would give inaccurate advice. The public is free to choose, but part of being “free” is the ability to discern clearly exactly what is being chosen.
There is, in my view, a “social contract” when one goes to a medical doctor. We ought to be able to trust that we are getting the best currently proven therapy. Patients should not have to worry about whether the physicians they choose are quacks. If they choose to go elsewhere, that is their right—but free choice is hampered if patients have no way to distinguish between proven and unproven or disproven therapies. If science-based medicine doesn’t meet your needs, you are of course free to look elsewhere, but you should realize that you are entering less-charted waters with much less assurance of reliability. It might be worthwhile to ask why some who sell unproven methods want to blur or hide that distinction.
Gregory Smith, B. Med. Sci., is a member of the MD class of 2000 at the University of Alberta (Edmonton). As of July 1, 2000, he will be a resident in Family Medicine at McGill University, Montreal. This article is based on exchanges on the healthfraud-discuss list, which is open to anyone who agrees to abide by its rules. Feedback to me is welcome.
This article was posted on August 22, 1997.