Caption: ‘International System of Units’. The SI base units: Symbol, Name, Quantity. A ampere electric current. K kelvin temperature. s second time. m metre length. kg kilogram mass. cd candela luminous intensity. mol mole amount of substance. Wikipedia.

Karmen Condic-Jurkic looks at a reboot for academic knowledge infrastructures, taking the decentralized web as a thought catalyst for doing science in new ways: disseminating knowledge faster and more easily, and making a clean start on metrics so that better research is rewarded, whether that means best practice, scientific integrity, or steering research towards social relevance, for example through experiments in tokenomics.

The academic system is in trouble; this has almost become common knowledge. The problems include peer review, scientific misconduct, the reproducibility crisis, the overproduction of PhD students, lowered admission criteria, ever-decreasing funding, paywalled access, and the large profits that privately owned publishers make from publicly funded research. These issues have been discussed on many platforms and have even found their way into mainstream media. Rather than going into more detail, I wholeheartedly recommend what is, in my opinion, one of the best assessments of the current state of academia: the 2017 paper by Edwards & Roy, which nails the core issue already in its title, “Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition”. It is indeed a challenging task. Much of the manuscript is dedicated to the role of quantitative performance metrics, which we know under names such as impact factor, h-index, citation count, and university rankings. These metrics have evolved from support tools for librarians into numbers that dominate the life of every scientist, as well as of the institutions where they work. The authors state:

“Quantitative metrics are scholar centric and reward output, which is not necessarily the same as achieving a goal of socially relevant and impactful research outcomes. Scientific output as measured by cited work has doubled every 9 years since about World War II, producing ‘‘busier academics, shorter and less comprehensive papers’’, and a change in climate from ‘‘publish or perish’’ to ‘‘funding or famine’’.”

The authors further quote Goodhart’s law, which states that “…when a measure becomes a target, it ceases to be a good measure”. This is what has happened in academia, to the detriment of both scientists and science itself. Reducing a complex human activity to a handful of numbers is a dangerous idea, although its simplicity is undeniably appealing.

Incentives were designed to maximize key performance indicators (KPIs) instead of best practice and scientific integrity. However, perverse incentives are tightly coupled to another important problem: the lack of appropriate infrastructure to support the easy and fast dissemination of knowledge and modern workflows. The free and open flow of information is crucial for any further advances in science and technology, and it has never been easier to publish and exchange information and data. Unfortunately, the mainstream scientific publishing model not only hinders access by placing research content behind paywalls; its format is outdated and provides little or no support for the needs of a modern researcher. Some people build virtual realities, while researchers keep downloading PDFs… To put it simply, this obsolete publishing style limits access to academic research for both humans and machines. And even when access is granted, the reader gets limited information instead of a well-documented research process together with the relevant research data.

Providing data together with a manuscript is a fairly recent development and still far from standard practice across disciplines. Making data available is one thing, but data also needs to be FAIR: Findable, Accessible, Interoperable, and Reusable. Satisfying these conditions is a bigger challenge, one tightly coupled with research data management, or the general lack of it. Although most institutions have guidelines and recommendations for data handling, implementation of and adherence to these guidelines is often left to scientists’ own devices, literally and figuratively, which results in inconsistent practices and data losses. The problem has been exacerbated by the ever larger volumes of generated data. Finally, there is the problem of access to one’s own data, caused by researchers’ frequent moves between institutions: losing credentials and access to an institution’s infrastructure results in important research data being stored haphazardly on personal hard drives or cloud accounts.

The situation is messy, but it is by no means exclusive to academia. The entire world is struggling to get incentives and data management right. As a society, we generate an unprecedented amount of information that can be made available for use and analysis, from social networks to sensors and the IoT, measuring everything all the time. However, who has access to and control over this data has become a burning issue, especially in the light of the Facebook scandal during the last US presidential election. The truth, though, is that we are still learning how to cope with such vast amounts of information from both a practical and a legal perspective, and there is no single right solution. Data ownership, and what constitutes an ethical use of data in different situations, are open questions, and difficult ones at that. The attempts to address these issues in academia and to increase transparency have resulted in what is now referred to as the Open Science movement. This is an umbrella term that encompasses a number of activities to improve both incentives and infrastructure. New journals and publishing models are emerging, the number of preprint servers and data repositories is rising, proposals to improve peer review, such as open peer review, are being tested, and a group of editors and publishers who “…recognised the need to improve the ways in which the outputs of scholarly research are evaluated” issued a set of recommendations in the Declaration on Research Assessment (DORA). On a more general level, there is a movement for the decentralization of the Web. What does that mean? In the words of Ruben Verborgh, in his excellent blog post about the decentralized Web:

“It is a fundamental rethinking of the relation between data and applications, which—if done right—will accelerate creativity and innovation for the years to come. … Ultimately, decentralization is about choice: we will choose where we store our data, who we give access to which parts of that data, which services we want on top of it, and how we pay for those. Nowadays, we are instead forced to accept package deals we cannot customize.”

Initiatives such as Solid by Tim Berners-Lee, or MaidSafe, are currently developing the underlying infrastructure to support such an approach to data and services. The movement has gained further momentum with the hype created around blockchain or, more generally, distributed ledger technology (DLT). Whether the hype is justified remains to be seen, but there are certainly plenty of reasons to be excited about this technology: it holds huge potential to rethink and redesign the ways we interact with the Web, with each other, and with service providers in every walk of life, including science. Naturally, there is a growing list of startups looking to bring this technology to academia, ranging from specialized platforms to those aiming to manage the entire ecosystem. The technology is appealing not just for its transparency, but also for its ability to incentivize desired behaviours via reward mechanisms (tokenomics, in other words privately issued currency). It is important to acknowledge that we do need some kind of metrics and incentives to assess and steer research in desired directions, but instead of sticking to a perverse set that clearly does not work, we need to find one that leads to the outcomes we want. What those desired outcomes are should be clear to all stakeholders well before any new incentives or metrics are implemented. In this context, tokenomics provides an interesting tool for experimenting with different incentives and finding those that encourage the desired behaviour. New ways to assess, reward and fund research can be tested and validated. However, there is a long road ahead, as the technology itself needs to mature. We also need to better understand which parts of the scientific process should be placed on a blockchain, if any.
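None of this experimentation requires a blockchain to prototype: at its core, a token-based incentive rule is just a function from observed contributions to rewards, and swapping the function changes what the community actually pays for. A minimal sketch in Python, with entirely hypothetical contribution categories and weights:

```python
# Hypothetical sketch: comparing two token-reward rules. The categories
# and weights below are illustrative assumptions, not any real scheme.

from dataclasses import dataclass

@dataclass
class Contribution:
    papers: int           # publications authored
    reviews: int          # peer reviews completed
    datasets_shared: int  # FAIR datasets deposited
    replications: int     # replication studies performed

def citation_style_reward(c: Contribution) -> int:
    """Status-quo analogue: only publication output earns tokens."""
    return 10 * c.papers

def open_science_reward(c: Contribution) -> int:
    """Alternative rule: reviewing, sharing and replicating also earn tokens."""
    return 4 * c.papers + 3 * c.reviews + 3 * c.datasets_shared + 5 * c.replications

# A researcher who reviews diligently and shares data is invisible to the
# first rule but well rewarded by the second.
alice = Contribution(papers=2, reviews=6, datasets_shared=3, replications=1)
print(citation_style_reward(alice))  # 20
print(open_science_reward(alice))    # 40
```

The point of such a sketch is only that incentive rules are testable artefacts: a community can simulate candidate rules against expected behaviour before committing anything to a ledger.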

Decentralization and blockchain are closely related topics, but they should not be treated as synonyms. The reason I became excited about decentralization was the promise of sustainable infrastructure for storing and sharing data. I stumbled upon two projects, Dat and IPFS, which provide protocols for p2p data sharing and content-based addressing. These protocols could easily rely on the existing hardware that every institution already provides for local data storage, including library facilities, which are vastly underutilized compared to services such as Dropbox or Google Drive. In fact, I have come to believe that university libraries and librarians are among the most underestimated resources the scientific community has at its disposal. Libraries could, and should, play a key role in knowledge dissemination and help us reinvent the ways we communicate, publish and archive science in the 21st century. Using existing resources would remove the need to fund some new (centralized) infrastructure for this purpose. The single point of failure in centralized services is another strong and commonly used argument for decentralization, but what I find even more compelling is the flexibility that decentralization brings. Different researchers, disciplines, institutions and countries obviously have different needs when it comes to data infrastructure and the accompanying legal requirements. Decentralization should allow for a variety of solutions to accommodate these needs, while still enabling communication between the various actors and stakeholders when necessary. The rise of smart contracts should provide a technical framework for specifying the conditions of any given exchange of information and services, while the inherent immutability of a blockchain could be useful for tracking the progress of research projects and individual contributions.
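The core idea behind content-based addressing can be sketched in a few lines: data is named by a cryptographic hash of its content rather than by its location, so any peer can serve it and any recipient can verify it. A minimal Python sketch (real protocols such as IPFS and Dat add multihash encoding, chunking and Merkle structures on top; the in-memory dictionary here stands in for a network of peers):

```python
# Minimal sketch of content-based addressing: name data by what it is,
# not by where it lives. Simplified to a plain SHA-256 digest.

import hashlib

def content_address(data: bytes) -> str:
    """The address of a piece of data is the hash of its content."""
    return hashlib.sha256(data).hexdigest()

store: dict[str, bytes] = {}  # stand-in for a p2p network of peers

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data
    return addr

def get(addr: str) -> bytes:
    data = store[addr]
    # The address doubles as an integrity check: the recipient can verify
    # the bytes without trusting whichever peer served them.
    assert content_address(data) == addr
    return data

addr = put(b"contents of results.csv")
assert get(addr) == b"contents of results.csv"
```

Because the address depends only on the bytes, identical datasets deduplicate automatically, any institution's hardware can host a copy, and a changed file gets a new address, which is what makes such schemes attractive for archiving and citing research data.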

Whether or not the decentralized web becomes a thing is not necessarily the most important outcome of this story. I see everything that is happening now around the decentralization movement and the Open Science initiatives as an opportunity to rethink the values that are important to us, both as humans and as scientists, and to create environments that are (better) aligned with those values and real needs. Modular and flexible frameworks that allow for a diversity of needs and available resources seem like the way forward; which technology underlies them is a less relevant question. I also think the entire conversation should shift from repeating what is broken in the current system to a more constructive framing: what would we like to see in academia, and how do we achieve it with the tools we have? A decentralized copy of the current system does not seem like a better solution. We know how to reward high citation numbers and long publication lists, but we are clueless about how to reward and stimulate good mentorship. Mentorship and knowledge transfer are pillars of the scientific community, yet there are no metrics that judge the quality of these skills. The same goes for the inner drive and curiosity that are still the main reasons people become scientists, but which are completely taken out of the picture. Nobody I know became a scientist to brag about their h-index, though quite a few eventually started to care about it. So, we need to start more conversations between researchers and infrastructure builders, funders, publishers, and librarians.
Researchers should not shy away from stating clearly and loudly what they need, and infrastructure developers should listen, and vice versa. (I often see a disconnect here: it is easy to fall into the trap of developers assuming they know what researchers need, and of researchers assuming they are powerless and have no say in any of it. Neither is true.) This new decentralized vision of the Web will require many new protocols and tools to be built, a lot of testing and experimenting, the creation and trial of new consensus protocols, and who knows what else. It will certainly take a lot of effort and patience, but there seems to be no lack of enthusiasm, at least judging by the individuals I have had the chance to meet so far. I, for one, am excited and optimistic about what the future will bring, which is not the normal state of my otherwise skeptical mind.


Tennant, Jonathan. ‘The State of The Art in Peer Review’. SocArXiv, 28 May 2018.

Camerer, Colin F., Anna Dreber, Felix Holzmeister, Teck-Hua Ho, Jürgen Huber, Magnus Johannesson, Michael Kirchler, et al. ‘Evaluating the Replicability of Social Science Experiments in Nature and Science between 2010 and 2015’. Nature Human Behaviour, 27 August 2018, 1.

Fazackerley, Anna. ‘Cut-Throat A-Level Season “Pushing Some Universities towards Insolvency”’. The Guardian, 28 August 2018, sec. Education.

Buranyi, Stephen. ‘Is the Staggeringly Profitable Business of Scientific Publishing Bad for Science?’ The Guardian, 27 June 2017, sec. Science.

Edwards, Marc A., and Siddhartha Roy. ‘Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition’. Environmental Engineering Science 34, no. 1 (1 January 2017): 51–61.

‘Goodhart’s Law’. Wikipedia, 29 July 2018.

‘The FAIR Data Principles | FORCE11’. Accessed 3 September 2018.

‘Read the Declaration – DORA’. Accessed 3 September 2018.

‘Tim Berners-Lee, Inventor of the Web, Plots a Radical Overhaul of His Creation | WIRED’. Accessed 3 September 2018.

‘Paradigm Shifts for the Decentralized Web’, 20 December 2017.

‘Solid’. Accessed 3 September 2018.

‘“I Was Devastated”: Tim Berners-Lee, the Man Who Created the World Wide Web, Has Some Regrets | Vanity Fair’. Accessed 3 September 2018.

‘Providing Privacy, Security and Freedom | MaidSafe’. Accessed 3 September 2018.

‘Open Science Ecosystem’. HackMD. Accessed 3 September 2018.

Mougayar, William. ‘Tokenomics — A Business Guide to Token Usage, Utility and Value’. Medium (blog), 10 June 2017.

‘Dat Project – A Distributed Data Community’. Accessed 3 September 2018.

‘IPFS Is the Distributed Web’. Accessed 3 September 2018.

‘Smart Contract’. Wikipedia, 14 July 2018.

DOI: 10.25815/gnpf-8v53

Citation format: The Chicago Manual of Style, 17th Edition

Condic-Jurkic, Karmen. ‘The Busiest Researchers Ever! The Decentralized Web & Ending the Culture of Misguided Metrics in Science’, 2018.