Bad science journals – nothing new here
Science journalism in Germany has been awash in recent days with a report on “predatory publishers” and an alleged integrity ‘crisis’ in German science. Journalists from the regional media outlets WDR, NDR, and Süddeutsche Zeitung, in collaboration with international partners (e.g., Le Monde), released new information (overview and links by ARD Tagesschau, in German) showing that some authors from esteemed German research institutes have previously had articles published in journals that apply next to no peer review to submissions.
This type of phenomenon has been well known under the “predatory publishers” moniker, coined ten years ago by the librarian Jeffrey Beall and his notorious list of such publishers. Some of the German journalists chose the relatively new term #FakeScience, clearly modelled on the ‘fake news’ phenomenon. (See also Robert Gast’s criticism in Spektrum and Arndt Leininger’s in Tagesspiegel, both in German; both suggest “junk science” as a better term.) Our colleagues from the Helmholtz Open Science bureau point out (FAQ, in German) that it is easy for researchers to avoid this kind of publisher: tools like ‘Think. Check. Submit.’ help authors identify legitimate journals.
Since the value of mutual review of papers by peer researchers is well established, and since most authors will in any case avoid publishing behaviours that might harm their reputations, it is no wonder that “predatory publishing” never really became a big thing relative to the total output of scholarly communication. Martin Paul Eve and Ernesto Priego (2017), for instance, argue that authors, even those duped by predatory publishers, are rarely actually harmed by such activities; the fear of harm is largely a myth perpetuated by traditional scholarly publishers, and one that in reality exposes problems with the peer review system itself.
The recent media report does not, at least so far, come with a release of the underlying data (according to the journalists’ own FAQ; NDR, in German). This is a sadly missed opportunity: the story broke, with all of its consequences, before the information could be independently verified. Thankfully, Markus Pössel tried to replicate the data-driven part of the news story at Spektrum’s SciLogs (in German); see also Raphael Wimmer’s complementary analysis of another data set. Pössel’s article is a quick and insightful read, and his results clearly challenge the German media outlets’ #FakeScience narrative. In his own words:
“For the statement that ‘Germany apparently plays a key role in this shady business’ (according to the Süddeutsche in a report on the research project) I find no evidence in my sample. Germany lies within the European average, does not stand out in any particular way, and, as noted, contributes only 2.5% of the articles evaluated here.”
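Pössel’s country-share figure is simple descriptive statistics over a sample of articles. As a purely illustrative sketch — the country labels and counts below are invented for illustration and are not Pössel’s actual data or method — such a share can be computed like this:

```python
from collections import Counter

# Hypothetical sample: one affiliation-country label per evaluated article.
# These labels and counts are invented; they are NOT the real data set.
articles = ["US"] * 40 + ["CN"] * 30 + ["DE"] * 2 + ["FR"] * 8

counts = Counter(articles)          # articles per country
total = len(articles)               # size of the evaluated sample
shares = {country: 100 * n / total for country, n in counts.items()}

print(f"DE share: {shares['DE']:.1f}% of {total} articles")
```

The point of such a replication is transparency: anyone with the underlying list of articles and affiliations can recompute the percentages and check a claim like “Germany plays a key role” against the numbers.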
What’s the big deal?
Although “predatory publishers” never became a big thing, how did they become a thing at all? First of all, “being published” by something that merely looks like a scholarly journal seems, to many, to be a “seal of approval” in the present scholarly ecosystem. Journal brands are used as a checkmark to sort what goes into a researcher’s CV and what legitimises them as an individual researcher. For many widely known reasons, however, this belief is misguided: it defaults to the profoundly unscientific practice of assessing research by the container it is communicated in, rather than by any intrinsic qualities of the research itself.
In recent years, research funding organizations around the world have essentially come to agree that research results should be published and valued to their fullest extent. This explicitly includes outputs beyond the comprehensive “story” usually delivered in journal articles: artifacts such as original research data sets or pieces of programming code. There is almost never an objective or selfless reason to delay publishing these artifacts until the journal article appears; they can be released alongside it, before it, or even instead of it. Quite the opposite is true: in many cases meaningful research is best witnessed by the public before the collection of empirical evidence even begins, as in study pre-registration.
The evolution of publishing and peer review
Extending this relatively young but well-accepted premise, one could go another step further: why not let the wider research community evaluate any, even preliminary, results? When peer review happens as early and as openly as possible, it helps other researchers as well as everyone who wants to learn from the results. That is why esteemed publishers such as F1000 and BioMed Central introduced “open peer review” (OPR) years ago, a process which has since become a well-studied practice. Perhaps its most obvious benefit is that OPR proves a review took place: when, by whom, and with what result. It is the way to definitively draw the line between predatory and non-predatory practices.
This perspective is not some trendy idea but is based on broad evidence from research itself. Take, for instance, the digital repository arXiv. A repository for so-called preprints, it was started within the particle physics community in the early 1990s and was one of the earliest functioning uses of Web technology. It has since become so popular in particle physics and neighboring fields that it inspired preprint repositories in other disciplines, such as bioRxiv. The concept is always the same: before even beginning the sometimes cumbersome process of submitting a paper to a journal, the author makes the article publicly available as a preprint. Many preprint repositories allow, and some even encourage, OPR on preprints directly, without a journal editor having to ask for it. The core principles are that knowledge is disseminated freely and rapidly, and that anyone who wishes can participate in engaging with and reviewing the article.
Who is afraid of predatory publishers?
Let’s face it: the peer-reviewed journal article has become the basic commodity of an extremely successful line of business, the scholarly publishing industry. It is well known by now that peer review practices vary greatly between publishers, countries, and even whole disciplines. (See also Wolfgang Nellen in Laborjournal Blog, in German.) Still, it is all too often taken for granted that one research result is “valid” because it somehow survived some kind of peer review within a typically opaque system, while another is not worth considering because it lacks that status. Predatory publishing exploits this belief in the meaning of the “peer reviewed” checkmark by delivering just the good-looking, familiar package: the article itself, plus the publisher’s suggestion that it passed peer review, without any substantial review having taken place.
This shallow certainty is reproduced daily, mostly by senior researchers who consider only “real journal articles” when assessing their peers for tenure, funding, fellowships, and the like. (See also Klaus Tochtermann’s interview on Deutschlandfunk, in German.) The same shallow certainty is what predatory publishers exploit. Sadly, only the latter exploitation is easily and regularly discovered, and typically avoided by authors and readers with some basic level of experience; the shallow certainty itself persists.
Transparency is the best cure
In fighting this shallowness in scholarly communication, the same holds as in fighting corruption: “Sunlight is said to be the best of disinfectants.” (Louis Brandeis) If disciplinary repositories became an obvious means of exchange in more disciplines, we would probably see much less need for shallow outlets for research results. (At the very least, we hear little about “predatory publishers” in particle physics, thanks to arXiv; see also Pössel’s replication study mentioned above.) If openness of peer review became the default, publishers would have a hard time faking it. Even more importantly, peer review would become a recognizable part of the conduct of research, potentially helping others make better use of it.
So, rather than labeling research articles as predatory, or labeling them ‘fake science’ as if researchers had some agenda to deceive the wider public, there is one simple solution: encourage and support preprints, and above all, publish your peer review reports. If you are a legitimate journal, you have nothing to hide and potentially much to gain from publishing your reviews.
Some tools for researchers to enhance their OPR practices:
- hypothes.is allows you to annotate and review almost anything you find on the web, word by word, alone or as a group
- Publons helps you to make your current and previous peer reviews publicly available
- The Winnower helps you to find reviewers for whatever published piece of information you have.
More interesting reactions, all in German:
- Maximilian Heimstädt and Leonhard Dobusch, just like me, describe OPR as a potential remedy to predatory publishing
- Manfred Götzke’s comprehensive discussion at Deutschlandfunk with Jule Specht and others sheds a light on bad working conditions in science
- Florian Freistetter in ScienceBlogs discusses the dysfunctionality of the current scholarly journal system
Many thanks to Jon Tennant for contributing to this article.
tagesschau.de. ‘Recherche “Fake Science”: Wissenschaft Auf Abwegen’. tagesschau.de. Accessed 26 July 2018. https://www.tagesschau.de/inland/fakescience-101.html.
Oransky, Ivan. ‘Why Did Beall’s List of Potential Predatory Publishers Go Dark?’ Retraction Watch (blog), 17 January 2017. https://retractionwatch.com/2017/01/17/bealls-list-potential-predatory-publishers-go-dark/.
‘News about #FakeScience on Twitter’. Accessed 26 July 2018. https://twitter.com/search?q=%23FakeScience.
Gast, Robert. ‘»Fake Science«: Dieser Begriff Kann Der Wissenschaft Nur Schaden’. Accessed 26 July 2018. https://www.spektrum.de/kolumne/dieser-begriff-kann-der-wissenschaft-nur-schaden/1579216.
‘Fakescience – Eine Warnung Vor Dem Hashtag’. Causa Debattenportal. Accessed 26 July 2018. https://causa.tagesspiegel.de/kolumnen/causa-autoren-1/fakesciene-und-warnung-vor-dem-hashtag.html.
‘Helmholtz Open Science: FAQs about “Predatory Publishing”’. Accessed 26 July 2018. https://os.helmholtz.de/open-science-in-der-helmholtz-gemeinschaft/open-access-der-goldene-weg/faqs-zum-thema-predatory-publishing/.
‘Thinkchecksubmit’. Accessed 26 July 2018. https://thinkchecksubmit.org/.
Eve, Martin Paul, and Ernesto Priego. ‘Who Is Actually Harmed by Predatory Publishers?’ TripleC: Communication, Capitalism & Critique. Open Access Journal for a Global Sustainable Information Society 15, no. 2 (13 August 2017): 755–70. https://doi.org/10/gdvp27.
NDR. ‘#FakeScience – Fragen und Antworten’. Accessed 26 July 2018. https://www.ndr.de/nachrichten/FakeScience-Fragen-und-Antworten,fakescience198.html.
Pössel, Markus. ‘Abzock-Zeitschriften, Datenauswertung Teil 1: Methoden, Ländervergleich, Gesamtzahl’. RELATIV EINFACH (blog), SciLogs, 22 July 2018. https://scilogs.spektrum.de/relativ-einfach/abzock-zeitschriften-den-daten-auf-der-spur/.
Wimmer, Raphael. ‘@a_leininger, @JaMoEberl: did a quick’n’dirty crawl of the WASET publications’. Tweet. @RaphaelWimmer, 22 July 2018. https://twitter.com/RaphaelWimmer/status/1021094037909098497.
‘Software Citation Theme’. Generation R (blog). Accessed 26 July 2018. http://genr.eu/wp/category/themes/software_citation/.
‘Trust in Science Would Be Improved by Study Pre-Registration | Science | The Guardian’. Accessed 26 July 2018. https://www.theguardian.com/science/blog/2013/jun/05/trust-in-science-study-pre-registration.
Ross-Hellauer, Tony. ‘What Is Open Peer Review? A Systematic Review’. F1000Research 6 (31 August 2017): 588. https://doi.org/10/gc5sjh.
Curry, Stephen. ‘Peer Review, Preprints and the Speed of Science’. The Guardian, 7 September 2015, sec. Science. http://www.theguardian.com/science/occams-corner/2015/sep/07/peer-review-preprints-speed-science-journals.
Nellen, Wolfgang. ‘The “Fake Science Scandal” – an Opinion!’ Laborjournal Blog. Accessed 26 July 2018. https://www.laborjournal.de/blog/?p=9840.
‘Informatiker Klaus Tochtermann Über Fake Science – “Katastrophe Für Die Wissenschaft”’. Deutschlandfunk Kultur. Accessed 26 July 2018. https://www.deutschlandfunkkultur.de/informatiker-klaus-tochtermann-ueber-fake-science.1008.de.html?dram:article_id=423280.
‘Hypothesis’. Hypothesis (blog). Accessed 26 July 2018. https://web.hypothes.is/.
‘Publons’. Publons. Accessed 26 July 2018. http://publons.com/home/.
‘The Winnower | Open Scholarly Publishing’. Accessed 26 July 2018. https://thewinnower.com/.
Heimstädt, Maximilian, and Leonhard Dobusch. ‘Predatory Open Access Journals: Open Peer Review as a Way Out?’ Open Media Studies Blog, Zeitschrift für Medienwissenschaft. Accessed 26 July 2018. https://www.zfmedienwissenschaft.de/online/open-media-studies-blog/predatory-open-access-journals-ausweg-open-peer-review.
Götzke, Manfred. ‘“Fake Science” – Was Läuft Falsch Beim Wissenschaftlichen Publizieren?’ Deutschlandfunk. Accessed 26 July 2018. https://www.deutschlandfunk.de/fake-science-was-laeuft-falsch-beim-wissenschaftlichen.680.de.html?dram:article_id=423395.
Freistetter, Florian. ‘Wissenschaftliche Publikationen Als Geschäftsmodell: Das Problem Bei “Fake Science”’. Astrodicticum Simplex, 23 July 2018. http://scienceblogs.de/astrodicticum-simplex/2018/07/23/wissenschaftliche-publikationen-als-geschaeftsmodell-das-problem-bei-fake-science/.
Citation format: The Chicago Manual of Style, 17th Edition
Heller, Lambert. ‘Beyond #FakeScience: How to Overcome Shallow Certainty in Scholarly Communication’, 2018. https://doi.org/10.25815/CJYS-6235.