Think of a new value metric for academic scientists

Image credit: https://unsplash.com/photos/_dAnK9GJvdY

jnikola, Sep 26, 2022
Scientists are valued mainly by the number of citations they accrue and the journals in which they publish (the number of successfully completed projects is also sometimes taken into account when applying for a new one). With increasing pressure from the best academic institutions and a growing number of grant applications, scientists are struggling to publish and "stay alive". As a result, we see a greater stratification between journals, scientists and institutions (similar to rich vs. poor), a rising number of "open access" journals, variable content quality, and questionable validity of results.
  • How can we value scientists beyond the number of published papers, their impact factors and citations?
  • Can we think of a new, more meaningful metric for a scientist, their work or a journal?
  • Is there a system in which scientists could focus more on science instead of on how to get more citations and funding?

Why do we need it?
  • Many journals have vanished from the Internet, along with the papers published in them. This happened because publishers stopped paying for web hosting, went bankrupt or shut down for other reasons; thousands of papers were lost, along with many scientists' citations [1][2].
  • Nowadays, many journals are available only online, putting enormous amounts of work at risk of simply perishing, e.g. through a hacker attack.
  • The value of scientists' work should be safer, more transparent and more comparable between fields.

[1] https://www.nature.com/articles/495433a

[2] https://www.nature.com/articles/d41586-020-02610-z

Creative contributions (3)

Using reproducibility in building the metric

Shubhankar Kulkarni, Sep 28, 2022
Reproducibility is essential for ground-breaking and trivial studies alike. If a study is not reproducible, its results may not hold in general and may depend on a number of factors (model system, equipment, chemical reagents, etc.), which further decreases the value of the study.
Reproducibility can also resolve the issue of contradicting studies (for example, one paper reports that X upregulates Y, and a second reports that Y upregulates X and not vice versa). The results of further studies can help confirm or refute either of the original ones. Alternatively, further research could provide additional insight; for example, both "X upregulates Y" and "Y upregulates X" might be true, forming a positive feedback loop.
Reproducibility can, therefore, show whether the results of an older study still stand. If they do not, points could be deducted from the older study. Therefore, a dynamic point system that can change in light of newer research should be put in place.
How can we incorporate reproducibility in the metric?
One way is to simply multiply the existing score by the number of future studies that support the hypothesis, as in the sketch below. If the score of a study is 10 and, over the next 5 years, 10 studies confirm its results, the new score will be 100.
Caution: the future studies that confirm the results of the previous study should not come from the same laboratory. They should come from different research groups for proper validation.
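A minimal sketch of this multiplier in Python, assuming hypothetical inputs: a base score and a list of confirming studies, each tagged with the lab that produced it (the names and data format are illustrative, not an existing system):

```python
def reproducibility_score(base_score: float, confirmations: list[dict],
                          original_lab: str) -> float:
    """Multiply a study's base score by the number of independent
    confirmations. Studies from the original lab are excluded, and
    each external lab counts only once."""
    independent_labs = {c["lab"] for c in confirmations if c["lab"] != original_lab}
    return base_score * max(1, len(independent_labs))

# Example from the text: base score 10, confirmed by 10 independent
# studies within 5 years -> 10 * 10 = 100.
studies = [{"lab": f"lab_{i}"} for i in range(10)]
print(reproducibility_score(10, studies, original_lab="lab_x"))  # 100
```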
jnikola, 2 years ago
Great idea! Reproducibility is really important and could be implemented into the "hierarchical tree" model I proposed by requiring at least three papers confirming a fact before it becomes a node. Until then, it would be a "pre-node".
Regarding the dynamic point system, I think multiplying could be a bit too much. For example, the paper that proved DNA is a double helix would have a score of a few million, since so many people have confirmed it. I don't think those millions would correctly reflect the paper's effect on current knowledge; it would let people live on "old glory" rather than follow the latest trends. In other words, someone could have many incredible breakthroughs in cancer bioinformatics or another specialized field where theirs is one of perhaps five labs doing the work, yet receive no multiplier because fewer people research the same topic (due to limited technology, limited resources, or the field being very new). It could introduce a great imbalance between fundamental and applied science.
The idea is great, but we need to discuss all the scenarios so we don't introduce unexpected and unintended errors.
Shubhankar Kulkarni, 2 years ago
J. Nikola Since you mentioned the DNA paper: it was a landmark study, and most fields in medicine depend upon its findings. Therefore, it can be a "big node" (validated by many other studies) in the hierarchy tree. Similarly, groundbreaking research elsewhere will be taken up by most research groups in the field or allied fields and become a milestone study. But I agree with you that some fields are small and new compared to others (for example, the origin-of-life field, which falls under biology rather than chemistry or physics).
Therefore, how about normalizing the score to the papers published in the field? For example, since numerous papers are published in the field of the "DNA double helix" and most of them validate the original study, the number of validations would be normalized to the total number of studies in the field. In a new field, if the number of papers is 10 but nine of them have validated the first study, the score of the first study would be 0.9. Similarly, if 90% of the DNA double-helix studies (or studies based thereon) have validated the initial study, that study would also score 0.9. This normalization would thereby assign equal importance to both papers (see the sketch below).
What do you think?
Note: A paper that validates a study should not simply cite it. It should present experiments suggesting that whatever the initial paper claimed is true.
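A minimal sketch of this normalization in Python, assuming hypothetical counts: the number of studies that experimentally validate the original result and the total number of studies in the field (the function name and inputs are illustrative):

```python
def normalized_score(validations: int, field_total: int) -> float:
    """Fraction of a field's studies that validate the original paper."""
    if field_total == 0:
        return 0.0
    return validations / field_total

# Both examples from the comment score 0.9, assigning equal importance
# to a paper in a small, new field and one in a huge field.
print(normalized_score(9, 10))      # small, new field
print(normalized_score(900, 1000))  # DNA double-helix field
```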
jnikola, 2 years ago
Shubhankar Kulkarni I agree that the DNA study was a landmark and should somehow be marked as a "big" node. Regarding the second paragraph, I understand what you are proposing, but I am not convinced it would result in fair scoring. As I understand it, you would, in other words, distribute 1 (or 100%) of some default maximum score among all papers in the field based on how validated they are, right? I think we may have gone too far with validations and diminished the effect of the paper itself on current knowledge. Papers should support each other, but that should happen in the publishing process, during the assessment of the validity of results. After all, it is an unavoidable part of every discussion section to comment on where your findings fit within the field.

Measuring the paper's impact on the hierarchically-ordered knowledge tree

jnikola, Sep 27, 2022
Instead of (or along with) measuring how many times a paper is cited, we could measure the paper's impact on a hierarchically-ordered knowledge tree. In other words, we could measure what kind of knowledge the paper delivers relative to existing knowledge.
Why?
  • a way to measure the impact of paper-delivered knowledge on science (how many new questions were created or answered, and how many new connections were added)
  • comparison of the effects of papers from different fields (where current impact factors lag)
  • creation of a unique scientific knowledge database that can be used for exploring and finding new questions to address (new projects)
  • a good base for developing new scientist or journal metrics that could be used instead of impact factors or the SCI

Imagine a hierarchy tree describing scientific knowledge.
DESIGN OF THE TREE
At the top are general terms describing scientific fields (biology, chemistry, physics, astronomy, etc.). The level below the top contains terms describing subfields of those general fields. The lower we go, the more detailed and specific the terms become. At the bottom levels we find multi-word phrases describing very specific topics, such as "the role of gene X in Z", "the effect of asteroid X on star Y", "the solubility of X in Y-based solutions", "chromosomal aberrations linked to Parkinson's disease", etc.
CREATION OF THE TREE
The hierarchy tree would be created in the same way as the databases mentioned below. The proposed approach is to build a list of keywords describing knowledge, ranging from general to highly detailed (like GO terms); this is the most important part of the algorithm. Each keyword would be extracted from a paper and thus have a citation connected to it. One paper could have many keywords and could be cited on many terms inside the tree. Based on the keywords, the field and the knowledge delivered, the hierarchy tree of terms (nodes) would be reshaped. A sketch of such a tree follows.
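A minimal sketch of such a tree in Python, assuming a simple node structure; the node names and keyword matching are illustrative (the contribution cites GO terms as the model for a real, curated keyword list):

```python
class Node:
    def __init__(self, term: str):
        self.term = term
        self.children: list["Node"] = []
        self.citations: list[str] = []  # IDs of papers cited on this term

    def add_child(self, child: "Node") -> "Node":
        self.children.append(child)
        return child

def attach_paper(node: Node, keywords: set[str], paper_id: str) -> None:
    """Walk the tree; every node whose term matches a paper keyword
    gets that paper attached as a citation."""
    if node.term in keywords:
        node.citations.append(paper_id)
    for child in node.children:
        attach_paper(child, keywords, paper_id)

# General fields at the top, increasingly specific terms below.
root = Node("science")
biology = root.add_child(Node("biology"))
micro = biology.add_child(Node("microbiology"))
micro.add_child(Node("resistance of E. coli to azithromycin"))

attach_paper(root, {"resistance of E. coli to azithromycin"}, "paper-42")
```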
EVALUATING THE EFFECT OF THE RESEARCH PAPER
Once the hierarchy tree/database is ready, fully curated and tested, papers would be evaluated based on the effect they have on the tree. If the paper-extracted keywords fit into the tree and leave an effect, that effect would be measured and quantified. Examples of effect types and their scores:
Type of effect on the tree / Score
  • a citation is added to an already existing term (node) / 1 point
  • an already existing term (node) gets split into multiple nodes because the paper delivers new knowledge on the topic / 10 points
  • two or more terms (nodes) merge into one / 10 points
  • a new connection is added between previously unconnected nodes / 5 points
  • etc.
Example:
A paper delivers new knowledge on "Resistance of E. coli bacteria to azithromycin", and the keywords extracted from the paper are "azithromycin", "reversible resistance to azithromycin" and "E. coli". The algorithm finds a match in the tree and splits the matched term (node) into the following two nodes: "Reversible resistance of E. coli bacteria to azithromycin" and "Irreversible resistance of E. coli bacteria to azithromycin" (10 points). The paper also gets added as a citation to 6 nodes (6 points). In total: 16 points. A scoring sketch follows.
Databases with similar designs: the KEGG pathway database, Gene Ontology
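A minimal sketch of this scoring in Python, using the point values from the list above; the effect names and input format are hypothetical, since a real evaluator would derive effects by diffing the tree before and after a paper is incorporated:

```python
POINTS = {
    "citation_added": 1,   # citation attached to an existing node
    "node_split": 10,      # existing node split into multiple nodes
    "nodes_merged": 10,    # two or more nodes merged into one
    "new_connection": 5,   # link between previously unconnected nodes
}

def paper_score(effects: list[tuple[str, int]]) -> int:
    """Sum points over (effect_type, count) pairs produced by one paper."""
    return sum(POINTS[effect] * count for effect, count in effects)

# Worked example from the text: one node split (10 points) plus
# citations added to six nodes (6 points) -> 16 points.
print(paper_score([("node_split", 1), ("citation_added", 6)]))  # 16
```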
Shubhankar Kulkarni, 2 years ago
Great work!
How do we deal with contradicting papers? For example, one paper reports that X upregulates Y, and a second reports that Y upregulates X and not vice versa. How do we handle such cases?
Similarly, suppose two different nodes are created based on one study, and ten years down the line those nodes get merged based on results from another study. In such a case, do we decrease the points for the earlier research? In other words, are the points fixed when a study is incorporated into the tree, or can they change based on newer developments?
jnikola, 2 years ago
Shubhankar Kulkarni Interesting question. In my opinion, if a paper describes a statement contradicting what is already known, that is a great sign that people are actively working on the topic. Since it is not the tree's task to question validity, contradictions need to be handled differently. I would propose several scenarios: 1) the paper gets cited under the newly opened node (as you described in your second paragraph), 2) the node gets reshaped to describe that the change in expression can go both ways, or 3) the node gets a "flag" meaning it should be treated with caution and requires a professional opinion (sketched below). The third option would require hiring reviewers to determine how to reshape the nodes. However, as the hierarchical knowledge tree develops, new tree-searching tools would emerge that editors could later use to assess the validity of data. These contradictions could then be resolved during the publishing process: if the data is valid, a new node would be created; if not, the paper would probably be stopped from publishing.
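A minimal sketch of the third scenario (flagging a contested node) in Python; the claim representation and the contradiction check are placeholders, since a real system would need semantic comparison of claims:

```python
class ContestedNode:
    def __init__(self, term: str):
        self.term = term
        self.claims: list[str] = []  # e.g. "X upregulates Y"
        self.flagged = False

    def add_claim(self, claim: str) -> None:
        """Flag the node for professional review when a new claim
        contradicts one already recorded on it."""
        if any(contradicts(claim, c) for c in self.claims):
            self.flagged = True
        self.claims.append(claim)

def contradicts(a: str, b: str) -> bool:
    # Toy heuristic: same words in a different order (e.g. swapped X and Y).
    return a != b and set(a.split()) == set(b.split())

node = ContestedNode("X-Y regulation")
node.add_claim("X upregulates Y")
node.add_claim("Y upregulates X")
print(node.flagged)  # True: reviewers decide how to reshape the node
```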
jnikola, 2 years ago
Shubhankar Kulkarni Considering your second question, I think the points should be fixed. They are there to help validate papers during the publishing process and to help measure the "worth" of a scientist's work. After 10 years of technological advancement, almost every scientific breakthrough looks like child's play. Therefore, I think it is better to measure the effect of a paper's findings on the scientific knowledge of its time, rather than change the score dynamically. But I must admit it is an interesting question. In some scenarios, dynamically changing scores could help young people grow and drive scientists to continuously follow the latest trends. This is a great topic for discussion!

Apply a dichotomised metric

Subash Chapagain, Sep 27, 2022
We can frame this problem in two related but slightly different ways. Academic scientists are not the same as academic science. So, when looking for alternative ways of evaluation (or metrics), we need to distinguish what we want to accomplish: are we aiming to measure the "worth" of a scientist as an individual, or of his/her work? A research paper, a book chapter, a conference presentation or any other form of scientific communication speaks volumes about the worth of a scientist, but none of these is an absolute measure of the individual. I believe that as students of science (and by this I mean that even professors and top-notch scientists are still students of science), an individual's worth resides in things beyond his/her direct publications and niche-specific reputation. We should consider some other characteristics of individuals if we really want to reorganise academic scientific meritocracy:
  • How good is the scientist at mentoring early-career researchers and research students?
This is of great importance. It is of little value to have several Nature papers if you cannot provide guidance and mentorship to the younger generation who look up to you. Science is a dynamic process based on the transfer of knowledge over time. A scientist should hence also be judged by how well they can transfer scientific thinking and how well they can inspire new people into science. They should lead by example and be available to provide feedback and research guidance to trainees and emerging scientists. This metric could be established through an anonymous rating system: for any scientist, the members of his/her lab would provide ratings annually. A PI who is a great mentor and guide would automatically rank higher than a PI who is just another supervisor.
  • How involved is the scientist in science communication and the popularisation of science?
From Carl Sagan to Richard Feynman, such personalities not only engaged in the frontier science of their era but also helped the general public understand what was going on. In today's world, where misinformation is a plague, this is needed more than ever. Scientists who do podcasts, engage in conversations and write popular-science books about their domain of work should be assigned more worth than closet scientists who never bring their science to the public.
Having said this, solutions like hierarchical ranking based on the body of work would be a good metric for judging the work itself, not the scientist's individual worth. Such a mapped metric speaks more about that particular domain of knowledge than about the overall contribution of the individual. It would be wise to deploy two parallel systems of ranking: one for the work and its novelty (this could even be applied to a group of scientists/collaborators/a lab) and the other for the individual scientist.
jnikola, 2 years ago
Great contribution, Subash Chapagain! You are right about the distinction between a scientist's work and the scientist as a person. In that context, my hierarchical tree ranking would, as you said, be a good measure of the scientist's work. Since it would measure the effect of a paper on the knowledge tree, it could also be used to measure the importance of the scientific knowledge delivered by a journal and hence replace the impact factor or other journal metrics.
Second, everything you said about measuring the worth of a scientist is true. However, a person working in science does not necessarily need to be keen on speaking in public, popularizing science, organising workshops, writing books, etc. I think what you are referring to is a special case.
If they do great science, they should not be criticised for not talking about it or sharing it. On the one hand, scientists should do science out of passion for it. On the other hand, to get a project, you must be a good mentor, organize or lead a project or workshop, or play an important role in the politics of science. I admit I still don't know what is best, but one way to keep the balance would be to require "soft skills", popularization activities and similar roles only when scientists apply for leading or PI roles. If scientists apply to work on a project led by someone else, their leadership, communication, science-popularization and writing skills should not carry the same weight.
Considering everything mentioned, I would propose two metrics: one for the "worth" of a scientist as a scientist (the effect on the knowledge tree described in my contribution), and another for their "worth" as a "person in science" (including all the things you mentioned).