The journal impact factor (JIF) was originally developed in the 1950s to help librarians decide which journals to purchase for their institutions by measuring how often those journals were cited in the scientific literature. Since 1961, Thomson Reuters has published the JIF in its yearly Journal Citation Reports (JCR), according to the article The Journal Impact Factor: Moving Towards an Alternative and Combined Scientometric Approach. However, the authors of that article note that the heavy emphasis placed on a high JIF has led to the rise of predatory publishers who deliberately misreport their JIF, or even invent fake "impact factors," to profit from unwary researchers.
How to Spot Predatory Journals
The authors of the article Preserving the Integrity of Citations and References by All Stakeholders of Science Communications note that the rise of these predatory journals has largely been driven by the "publish or perish" dictum predominant in academia, where careers, promotions, and funding can depend on a researcher's publication record. That pressure gives predatory journals a steady supply of unwary researchers to exploit.
To identify these problematic publications, look for these hallmarks: contact information and addresses that do not match, indiscriminate invitations for researchers to submit their work, and publication topics that span many unrelated disciplines.
The authors also note another hallmark of these predatory journals: they make inflated claims about their impact factors, often asserting higher metrics than older, established journals, or citing a "global," "universal," or "unofficial" impact factor that is not backed by Thomson Reuters' calculations.
What Makes a Journal “Predatory”
In the article Dangerous Predatory Publishers Threaten Medical Research, Jeffrey Beall (best known for Beall's List, his catalog of predatory publishers) notes that it is increasingly common for scholars to receive online invitations either to submit manuscripts for easy publication or to sit on a journal's editorial board. These publishers often claim high impact factors and impressive metrics to attract unwary authors.
Beall defines predatory publishers as those that profit from researchers through gold open access: researchers pay to be published, and the journal's content is then made freely available to the public. Although the websites of these journals look similar to those of high-quality journals, there is no genuine peer review process behind them, and the editorial policy is to accept as many manuscripts as possible to maximize profits.
Again, the impact factor plays a part in these deceptive practices: predatory journals claim impressive impact factors that they do not actually have. Beall notes that there are now companies that will supply such fake metrics to these journals.
The pressure in academia for scholars to "publish or perish" has led to the rise of predatory journals that lure unwary researchers into paying a fee to publish their work. These journals often use fake metrics that mimic the JIF to bolster their claims of impact. Not surprisingly, problems with misleading impact factors have spurred JIF critics to call for alternative methods of assessing a journal's true impact.