Measuring Journal Impact? Consider These Three Alternatives
Prestige has always been the Holy Grail for manuscript placement. Authors pursue elite publications with the most subscribers and highest number of citations — all algorithmically measured by journal rankings (Journal Impact Factor, H-Index, Eigenfactor, and other citation indices).
But while such impact scores have their merits for the authors themselves — whose paychecks, reputations, grants, and very careers notoriously depend on the journals they reach and the citations received in their wake — those types of quantitative metrics only tell half the story for Medical Affairs professionals.
What are they missing? In many cases, traditional high-impact journals check the “medical reputation” box, but what they offer in prestige they may lack in scientific relevance. To understand why, it helps to recognize how these influence rankings began some five decades ago.
How Rankings Rose to Dominance, Then Fell to Fraud
Scholarly journal rankings were born in the 1960s as the Science Citation Index (SCI), with a simple goal: to make major works easier to store and find in library cataloging systems, as the information scientist Eugene Garfield noted in a 1970 essay in Nature.
Put simply, the explosion of evidence-based science over the past several decades has brought with it a publish-or-perish mindset, driving researchers into a relentless pursuit of the “best-ranked” publications. International organizations adopted pay-for-impact models that reward investigators for authoring in top-ranked journals. Predatory journals and citation cartels falsely inflated their own metrics. And authors and publishers alike tried other back-channel tricks to influence citation-based scores, all to play the “high-impact game.”
That wave of less-than-ethical practices has also seduced publications into favoring “sexy papers,” a descriptor the University of California, Berkeley biologist Michael Eisen famously coined in his exposé about a fake manuscript he circulated to test one journal’s peer-review process. Despite its strategically placed shoddy science, the journal accepted Eisen’s manuscript (which linked arsenic in DNA to a greater understanding of alien life). The same journal went on to have its own issues with impact factors some three years after Eisen’s essay.
Eisen’s exploits demonstrate the prime risk for Medical Affairs professionals who rely wholly on influence scores. After all, articles with eyebrow-raising, sexy topics often catch the attention of researchers, who go on to cite them in new papers and artificially inflate the journals’ rankings. What we’re left with is a system that values citation metrics and reputation over research quality and relevance.
It’s also worth noting that pharma, and healthcare broadly, hasn’t been alone in this struggle. Other fields, including higher education and astronomy, have wrestled with the limitations of citation-based metrics and their impact on research quality.
Alternatives That Go Beyond Traditional Impact Metrics
Of course, most scientific publications do employ rigorous peer-review processes and, in many cases, rightfully earn their impact scores.
But to pair scientific quality with prestige, Medical Affairs professionals responsible for the relevant dissemination of science should also consider these three alternatives, which can complement publication scores for a full-picture view of the Medical Affairs metrics that matter:
1. Scientific Relevance
We’re not saying publications aren’t the place to go for sound scientific research. Quite the opposite. But when a journal’s worthiness rests on the pedestal of systematic metrics, relevance usually tells a fuller story than prestige. After all, 10,000 relevant and engaged readers hold more stakeholder value than 100,000 irrelevant ones.
Start by considering your objectives and rank journals by scientific relevance. In many cases, that means searching with specific parameters for disease state, mechanism of action, population size, study site, and more. Rather than selecting journals solely by their influence, these search filters can identify the publications best suited to your research and give it the reach it deserves.
2. Medical Conferences
You might find that impact factors omit event-related data from their assessments, which represents a huge miss, especially given the authority and thought leadership medical conferences can provide. For a Medical Affairs professional, such events are a goldmine of opportunities to connect, network, and disseminate the science to a room full of engaged and relevant ears. It’s also crucial to keep track of research published at conferences.
Start with targeted parameters that select for conferences whose attendees and speakers most align with your own research objectives — whether by disease area, research type, or scientific philosophy.
3. Key Opinion Leaders
A paper’s success is only as good as its investigative team, so why do rankings omit those who architected and executed the research in the first place? Citation-based models rank the publications, not the authors, leaving Medical Affairs professionals no way to find the high-ranked individuals who might shepherd their trials to the marketplace.
Plus, for those seeking individual points of view from key opinion leaders, such perspectives are in short supply in medical journals anyway, as one researcher recently wrote in MedPage Today.
So how do you reach these influencers if the rankings do not include them? Try parameter-based research. (If you’re sensing a theme here, you’re right.) Medtech databases offer an expansive trove of text-mining opportunities that go beyond impact factors.
A Final Word on Rankings
As Medical Affairs shifts beyond the job roles it once held — giving rise to what we’ve called Medical Affairs 2.0 — the expectation to disseminate science to qualified, relevant stakeholders persists.
But don’t mistake prestige for relevance, an error often made when considering impact factors in a vacuum. While influence scores do have their merits, any strategic syndication plan should weigh them alongside other publication metrics and non-publication sources.
Above all, start with your objectives and plan based on those. With that direction, you might just find that your own Holy Grail differs from those touted by rankings alone.