In my last post, I discussed points brought up by Ezekiel Emanuel, MD, PhD, and John Ioannidis, MD, about research in medicine. The takeaway was that students are strongly incentivized to do research during their training, but those incentives don’t necessarily reward high-quality work. Furthermore, they don’t directly contribute to clinical skills or to becoming an effective practitioner. As a result, students may be spending time and effort on projects that fail to maximize value both for themselves and for the medical system at large.
This inefficiency has several consequences. At the individual level, it means students spend more time in training (arguably 30-40 percent more), accrue more debt, and lose the opportunity to pursue other interests. At a societal level, it may contribute to the growing physician shortage and potentially limits the productivity of highly talented and well-educated people.
The stakes are high when it comes to designing a system of medical education. After my last post, I spoke to several other medical students about the subject, and many of them felt that research requirements don’t align with their eventual goals. This got me thinking about how we can improve things, but before proposing any solutions, it’s important to understand the mentality that led to the status quo. So in this post, I want to delve deeper into why there are such strong incentives for research in medical training.
In reading and thinking more about the subject, I’ve identified four reasons. The most commonly cited one is that research is a means to a pedagogical end. It’s a way to teach students how to think critically about a problem, analyze available solutions, test those approaches, and then synthesize the resulting information. It’s the scientific method at work, and doctors have to use that method every day.
While true, this alone doesn’t justify medical training’s emphasis on research. It’s possible to develop those same skills through many intellectual pursuits, whether it’s working on a policy platform, developing a health education and outreach program, or even holding a corporate job, among other possibilities.
The second reason is that medical schools are typically part of a research university. As the name implies, one of a research university’s primary purposes is to do research – institutional prestige relies heavily on academic output. As members of this community, medical students are expected to participate.
But once again, this line of thinking doesn’t entirely explain why medical training should prioritize research to such a great extent. Consider two other professional schools at a university – business and law. Most students in these programs go on to become practitioners (just like most medical students go on to become practicing physicians). Students have the opportunity to conduct research, but the emphasis is on pursuing extracurricular activities relevant to their career plans.
Third, money likely plays a role in the incentives around research. Research projects bring in grant money. And returning to the notion of institutional prestige, programs that draw in more money get higher rankings. One of the most popular medical school rankings, the U.S. News Best Medical Schools: Research ranking, uses NIH funding as the single biggest factor in ranking schools (incidentally, research output is not a factor in the U.S. News law or business school rankings). Medical programs therefore have an incentive to encourage their trainees to work on research projects, and to eventually become researchers in their own right.
It almost goes without saying that the availability of money shouldn’t be the basis for unproductive work. The money (and prestige) should be allocated to incentivize activities that ultimately improve patient care, whether that is through research or otherwise. While it’s admittedly difficult to know which activities contribute the most, there is little question in my mind that we can do better.
Finally, there is a powerful historical basis for research in medical training. Once upon a time, the roles of physician and researcher were synonymous. The people who had enough education for one of those jobs were pushed into the other by necessity. Physicians would treat patients, and each treatment was itself an experiment. That mentality is still very much alive in medicine today: The ideal doctor is one who goes to clinic and saves lives in the morning, and then goes to lab and redefines the standard of care in the afternoon.
That doctor is largely a thing of the past. With a few exceptions, we now have professional clinicians and professional researchers, and the division of labor allows both sides to be more efficient.
With all of that said, this isn’t an argument to eliminate research opportunities in medical school. I have personally had many valuable research experiences and I expect those to continue as I move forward. Research is what drives progress in medicine, and physicians should know how to interpret new findings at the very least.
Nor is this an argument that medical trainees should focus 100 percent of their attention on clinical training. Many medical students have a wide variety of interests and the capacity to excel in multiple arenas.
Instead, this is an argument that it’s worth questioning the assumptions that underlie medical training and starting a conversation about how we can do it better. Medical education is built on a model that is more than 100 years old, and doctors now face much different challenges than they once did. The education system must evolve to allow today’s physician workforce to rise to the occasion.
This piece is re-posted courtesy of the Stanford School of Medicine’s medical blog Scope, and is part 2 of a 3-part series (Read Part 1 here and Part 3 here). Read the original piece and check out more writing on Scope here.