
What are the biggest challenges to knowledge generation and dissemination in academia?

Image credit: I Putu Balda / unsplash.com

Darko Savic Sep 25, 2020
What are the biggest challenges in the systems associated with knowledge generation?

Given that all academic work is an attempt to produce knowledge - yes, academia, I'm looking at you. Why are things in academia done the way they are? What if we could rethink everything from a first-principles standpoint and optimize for the desired outcomes?

Let this brainstorming session focus on defining the problems. We can come up with solutions in separate sessions. For example, compartmentalization and the inability to contextualize knowledge are some of the challenges. What else?
15 creative contributions

Publish or Perish: integrity, reproducibility and collaboration

Antonio Carusillo Sep 26, 2020
"Evidence are supposed to be evidence, right?" (Rick and Morty)

Currently science, and academia in particular, is experiencing a "reproducibility crisis". In a 2016 Nature survey (1), only between 10% and 40% of the studies investigated (the two extremes being cancer research and psychology) were found to be reproducible. Those are important numbers since, as you may guess, they raise a big question mark over what is going on. Around 1,500 scientists spanning fields from biology to physics were interviewed about a possible "reproducibility crisis" in science, and some of the problems highlighted were:
- Selective reporting: you choose what to show and what not to show.
- Pressure to publish: no papers > no grounds for grants > no money > bye-bye lab.
- Poor statistics: not enough replicates (underpowering), too many replicates (overpowering), or simply choosing the statistical test that returns the desired result.
Those are the top 3 of a list of 12 major causes. There is even a specific entry for "fraud", but in my opinion fraud can apply across most of the other entries (2).

The major cause of this, in most cases, is the need to publish. The number of papers, the importance of the journals where they are published (impact factor), and the citations are how the importance of a specific lab is weighed. Based on these, it is possible to apply for research grants: a certain amount of money given by stakeholders (public or private) to keep conducting research. That money is needed to pay for the people, the equipment, and the cost of publication (yes, after all the effort and money spent writing a research paper, you need to pay to publish it once it is accepted, and it is not cheap!). As a consequence, it is not hard to believe that even enlightened people like scientists may resort to shadier strategies to produce the data that will secure them the money for the next 3 or 5 years.
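The underpowering/overpowering point can be made concrete. Below is a minimal sketch, assuming Python and the standard normal-approximation formula for a two-sample comparison (neither is mentioned in the discussion above, and the effect sizes are Cohen's conventional small/medium/large values, used purely for illustration), of how many samples per group are needed to detect an effect of a given size:

```python
# Normal-approximation sketch: samples needed per group to detect a difference
# of d standard deviations with a two-sided test.
# Standard approximation: n ≈ 2 * ((z_{1-α/2} + z_{power}) / d)^2
from statistics import NormalDist
from math import ceil

def samples_per_group(d, alpha=0.05, power=0.8):
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance quantile
    z_power = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

for d in (0.2, 0.5, 0.8):  # Cohen's d: small, medium, large effect
    print(f"d={d}: ~{samples_per_group(d)} samples per group")
```

A small effect needs hundreds of samples per group, a large one only a couple of dozen; running far fewer (or far more) than the calculation suggests is exactly the underpowering/overpowering problem described above.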
And this is true for both early-stage scientists and well-established researchers. The need to publish also has other consequences that affect reproducibility and dissemination in science.

"Gentlemen, there are no points for second place" (Top Gun)

To be accepted for publication by a high-impact-factor journal (Nature, Science, Cell, PNAS), a paper has to be appealing and NOVEL. Novelty is a big thing. So even if your research has yielded great results, if they are not 100% novel a journal may refuse to publish it on that basis. The exception is when two or more papers showing almost the same observations are submitted at almost the same time by different labs; this is the case of the so-called "back-to-back publication", where the papers support each other, giving more reliability and robustness to the results. So not only do you need to publish, you have to be the first one. This translates into:
- Poor disclosure of preliminary data, and hence poor collaboration between labs. Nobody wants to fully disclose preliminary data because "if they are faster than us they may publish first" (I bet you have heard this sentence a million times). This also has the unfortunate result that sometimes a lab may not be able to fully understand some preliminary data on its own, but because the data cannot be shared, it rots and decays in some lab journal, while there could have been the chance of a big breakthrough if only the labs had collaborated. Sometimes young PhD students attending a conference are warned by their lab head: "be careful who you speak to, there could be competitors". In jargon, such competitors are even called "sharks". No kidding!
- Poor disclosure of protocols. Another reason for poor reproducibility in science is partial or incomplete protocols.
Sometimes people spend entire years optimizing a single step in a long protocol, and once optimized, it can make the difference between a properly working experiment and a poor one. This optimization step is like my grandma's secret ingredient for her killer tomato sauce: it gives the upper hand. So why share it outside the lab? Let's keep it "within the family". It has happened to me to chat with "random people" during a coffee break and learn that they changed something in a protocol because "it works way better that way". But guess what? It was never disclosed in a paper. New studies are based on previous studies, and if you try to use the protocol from a previous study and it doesn't work, you cannot reproduce the results. In some papers, people even politely write "we failed to reproduce these data, maybe because the reagents were not the same". This kind of statement always makes me suspect it is just a subtle way of saying "the data could not be reproduced and we do not know why".

"I'm an ogre! You know, 'grab your torch and pitchforks!' Doesn't that bother you?" (Shrek)

The last point is that journals do not like negative results. A negative result is most of the time considered a "failed experiment", an inability of the researcher to prove right what he/she was looking for, with the exception, to my knowledge, of one single journal (3). Why is publishing negative results important? Because it would save time, and money, for a lot of scientists. Another classic conference coffee-break line is "yes, we also tried that and it did not work". Sometimes I have been told that even while giving a presentation. So why did I get to know it only then? What if I had never found out, and had kept trying the same experiment over and over again, maybe even blaming myself for being incompetent or, even worse, trying to "make it work"?
Journals should have specific issues for negative results. This would help save scientists time and also prevent people from "fabricating" data: if it is already known that something doesn't work, why is it suddenly working out of the blue?

"Won't somebody please think of the children?" (The Simpsons)

And all of this translates into a huge amount of stress, and among the victims we can count the PhD students, whose mental health is dramatically affected (4). Imagine you are in an environment where you are forced to publish at all costs. Imagine that as you start, you are given the following speech: "the project is yours and you are the one responsible for it". So you are brainwashed into thinking the project is "your baby" and that whatever happens is your fault. You will be happy when it brings results and demolished when it doesn't work; in science, the second is likely to happen more often than the first. So now you are conducting your project and the results won't come. Your lab head needs those results too, because he has to publish. What happens? You get more and more stressed. Most of the time the lab head doesn't stress you directly, but you are prone to stressing yourself, because you are aware that "they need" those results, and if those results are not there yet, it is because of you. This has happened to me all the time. You can smell the disappointment in the air, you feel that the project is not going well, and this makes you unhappy; at a certain point it is hard to tell whether you are unhappy because of the project or because you are unable to give your lab head the results he/she needs. Long story short: PhD students experience mental burnout and breakdowns, and develop suicidal tendencies (5). Oh yes, PhD students may commit suicide if the p-value is bigger than 0.05. Most universities have a shrink (sorry for the term) to take care of this. Publish or perish, what else?
References:
1. https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970
2. https://www.bmj.com/content/345/bmj.e6658
3. https://jnrbm.biomedcentral.com
4. https://www.nature.com/articles/d41586-019-03489-1
5. https://qz.com/547641/theres-an-awful-cost-to-getting-a-phd-that-no-one-talks-about/
Shubhankar Kulkarni a year ago
This "publish or perish" horror has its own consequences that in turn lead to poor quality of research. Authors need to constantly be on their toes and have to read potentially everything that is produced in their fields; this rarely happens, since there is a lot to read. Because you don't read everything, you may think your idea is new. Also, the haste may divert you from finishing the experiments that could have made the concept foolproof. This leads to half-baked ideas being published and stale ideas being revisited. The urge to publish is so strong that quality does not matter much. If you can wrap a half-baked concept in superb language (choice of words), the chances are good that it gets through. Publishing then becomes a matter of expertise in presentation rather than expertise in the domain.
Subash Chapagain a year ago
Genuinely original thoughts there. Regarding your suggestion of specialized journal issues for negative results, could we mould it into a new idea session?
Shubhankar Kulkarni 9 months ago
Subash Chapagain Antonio Carusillo It's like they heard the cry.

JOTE (Journal of Trial and Error) https://www.jtrialerror.com/ "publishes answers to the question “what went wrong?” in the form of short communications, as well as problematizing ‘the question of failure’, facilitating reflections and discussion on what failure means in research."

So basically they publish negative results. Moreover, the journal is fully open access.

According to their manifesto (https://www.jtrialerror.com/the-manifesto-for-trial-and-error-in-science/), they intend to reduce the gap between what is researched and what is published, display a more faithful picture of science, and host a platform for views on replicability of results.

Acute hierarchies and extreme political inclinations

Subash Chapagain Sep 25, 2020
Like most other enterprises, academic institutions too are structured in a manner that manifests power hierarchies. Though such power play can be good for a corporate operation or a financial institution, it becomes problematic inside academia. Ideally, universities and research institutions would do much better if there were no strict demarcations between researchers and mentors, and if there were far less administrative strain. In reality, however, we often see a sort of unhealthy power play inside laboratories and research groups. We have all heard of instances where a brilliant research idea was rejected because the PI did not like the ideator personally (and we know how asymmetric the power of a PI is over his/her research assistants). Similarly, we have also heard of a brilliant research proposal gathering zero funding, all because the professor did not have a good relationship with the admin staff who approve the finances. These kinds of power ranks are counterproductive to the whole idea of producing knowledge in a free and open manner. The process of knowledge generation (inception, exploration and discovery of original ideas) suffers a lot from unhealthy exercise of power (both intentional and unintentional). These hierarchies exist in all strata: among students (PhDs over Masters or undergraduates), among professors (professors over associates and assistants), between professors and students (PI over graduate researchers), and between the administration and professors and researchers. Such asymmetry hinders the speed and efficacy of knowledge-generating systems. Another problem/challenge that has stifled academia throughout the history of modern education is the extreme political proclivity of researchers and scholars.
The most visible (and apparently the most influential) of such chasms is the dichotomy between the schools of thought divided into the Chicago and the Austrian camps when it comes to understanding the economics of the world. Though the proponents on each side of the political spectrum might have equally (in)valid positions and theoretical assertions, what this is doing to the system of knowledge generation is largely counterproductive. A similarly extreme division is seen between the Marxists and the Capitalists, while the solution to the major problems of the world today might lie somewhere in the middle. However, by virtue of the strong political propensity of the proponents for their own respective 'schools of thought', coming to a consensus is almost always impossible, which beats down the whole purpose of any scholarly pursuit. Read about the two schools of thought on economics and law:
1. https://comsf12.wordpress.com/2012/12/11/class-6/
2. https://mises.org/library/chicago-school-versus-austrian-school
3. https://aeon.co/essays/how-the-frankfurt-school-diagnosed-the-ills-of-western-civilisation

Sparse inter-disciplinary communication

Shubhankar Kulkarni Sep 28, 2020
1. Academicians/researchers seldom talk (about their work) to their counterparts from other fields of academia/research. Their meetings and conferences have a specific (not at all diverse) audience. More often than not, they discuss, collaborate with, and even cite their "friends" and ignore researchers with opposing/different views.
2. Researchers do not talk to people from allied and complementary professions. For example, there exists very little communication between biologists and medical professionals. Even in this case, they organize conferences for their own kind. Breakthroughs in research take years to reach mainstream medicine.
3. Although pharmaceutical industries have good communication with medical professionals (for reasons other than the exchange, and thereby enhancement, of knowledge), they have little communication with researchers. Research in industry is targeted, not open like in academia (whether research in academia is "open" is another issue). Breakthroughs in research are often of little interest, and sometimes contradictory, to the sales/agenda/view of the pharmaceutical industries, and they therefore avoid crossing paths.
All of this ultimately degrades the quality of education imparted in schools. Education in schools lags behind current progress in the field. I found a few references where even "research in education" is way ahead of "practiced education". This happens in almost all fields of education. As a result, education from schools is more or less useless by the time the student applies to industry. Industries then need to spend more resources on training the candidate.

[1]https://www.americanprogress.org/issues/education-k-12/reports/2018/06/20/452225/addressing-gap-education-research-practice/

[2]https://www.tandfonline.com/doi/abs/10.1080/13803610701640227?journalCode=nere20


The student should drive the learning process

Povilas S Mar 12, 2021
For the learning process to be truly efficient, the one who wants to learn something must drive it. The answer should not come before the question. The general situation in academia and other educational institutions is exactly the opposite. That kind of system does not give people the knowledge they truly want to have. Instead, it offers what they have to learn to pass the exams and ultimately get the degree. You either stay in and finish or drop out.

The institution drives the process; you are a player. Your free choice basically ends once you've chosen a particular study program to pursue. It's like you've chosen to play a specific computer game: once you've passed all the tasks in it, you get the reward. There's little room for free learning where you can ask questions, discuss things in a free manner, in short, spend time learning what you actually want to learn, not what you have to learn. There is some room for that, but it's not the core of such education systems, and that is the problem.

So what should the alternative look like? Pretty much like learning with the help of an online search engine. You start with a question, a topic you are interested in, a problem to solve, etc. It's user-driven and user-oriented. It's just that having a live teacher proficient in a specific field is much better (for now at least, while the technology still has flaws) and faster. You can ask specific questions and get answers or recommendations for proper learning material, and so advance the learning process (which is relevant to you) further. And most importantly, you can have real-life practice with a proficient individual to help you. A teacher should have no business teaching you something unless you want to learn it.

The whole education system could be reorganized that way. It could become individual-oriented. It's hard to imagine this in the present situation, but it's possible. This should start with children's education. A teacher should help the child explore his natural fields of interest, encourage him, and provide possibilities and tools for that, instead of pumping in the knowledge which is allegedly necessary to get established in society. Alternative education schools (like Waldorf and Montessori) are at least partly based on this principle. Self-sufficiency is a good strategy to teach for bringing up free, creative individuals: instead of learning how to fit in, you learn how to be self-reliant and independent on a material level. This is absolutely lacking in schools, because the current agenda is otherwise.

Once the individual is mature enough to ask specific questions, focus on exploring specific topics in his/her natural fields of interest, they then can start to actively drive the learning process themselves.



Shubhankar Kulkarni 7 months ago
Agreed. Teachers and professors should be like pillars, simply there to guide the students on their quest (whichever topic they choose) rather than dictate the path and the destination (the knowledge that they think should be imparted to the students). For example, if a student wants to understand a theory that is not taught, they should be allowed to pursue it. They may seek help from the professors. If the professors can guide them, well and good. If not, they should be allowed to research on their own and understand it themselves. Their evaluation can/should then be done by experts from the field (who need not be from the same institute).
Povilas S 7 months ago
Shubhankar Kulkarni I'd say in an ideal case scenario external evaluation is not needed and is, in fact, another obstacle. It's part of the controlling process. Your skills after the learning will be your evaluation. If you are able to do/understand what you wanted then that's it, if not - you continue to learn. It's like in the case of bounty sessions here - if some expert were able to teach you what you wanted to learn, great, if not, maybe another bounty session, etc.

The difference, however, is that in the case of bounty sessions, the one who wants to learn/understand pays for it; it would be great if such a process were financed for young people. Now it's financed because the government expects to "grow" professionals who can then give back to society, and therefore to the government. So the students ultimately have to play by the rules of the system. It's not really giving. It's not oriented towards the individuals; it's oriented towards the government.
Shubhankar Kulkarni 7 months ago
Povilas S I understand and support your point. Evaluation cannot be the measure of one's gain in understanding. For one, understanding is not similar, or even comparable, across individuals. The way I perceive something should not dictate how others perceive it, nor should they be judged based on my perception. Another major reason is that evaluation can be bypassed. I can memorize and score higher on a test than a person who has understood the material but lacks things like remembering the dates when a thing was invented/stated/written and by whom, or even remembering the "classical" examples. On the other hand, if I learn something over the years and approach a company for a job, I should be able to get a chance at the interview and not be discarded only because I was not evaluated. My job should be based on the things I can do and how well they fit the company's profile. My graduation score should not matter.

However, evaluation is the closest proxy there is. With a large number of students graduating/moving to the next class, "real evaluation", in the sense you suggest, is impossible. Which corporation will spend hours interviewing every candidate? Moreover, how many interviewers know what they really want from a candidate? They simply tick boxes (their objective expectations) and hire the people who get the maximum ticks. Evaluation is a proxy they use as the first filter; candidates not fulfilling it don't even appear for the interview. I am sure there are stats on how many good candidates a company loses just because of their primary filters. And I think that proportion will be significantly smaller than the proportion of candidates who were hired and were perfect for the job (understanding- and knowledge-wise).

Is there an easier way to showcase/ conduct "real evaluation"?

Language barriers and the linguistic-cultural imperialism

Subash Chapagain Oct 08, 2020
One of the extrinsic problems that infect the mainstream systems of knowledge production is the limitation on the propagation, and sometimes generation, of knowledge due to the hegemony of English as the unofficial lingua franca of academia.
Of all the journals indexed in SCOPUS, the largest abstract and citation database of peer-reviewed literature in science, technology, medicine and the social sciences, 80% are published in English. Not just in peer-reviewed publication: the English language dominates scientific journalism worldwide. The widespread use of English might facilitate knowledge dissemination across national and cultural boundaries, but at times it acts as a negative regulator of scientific discourse. Such hegemony of one language in science forcibly promotes the imposition of one particular cultural point of view over others, which beats the intrinsic objective of knowledge production.

By ignoring other languages (and languages are de facto the product of a long cultural-historical background), the mass media of today's world undervalues the ownership of local communities with regard to traditional knowledge. Take, for example, the indigenous ways in which North-East Indians have been using bamboo for a variety of purposes for a long time. Though these people in remote areas might not be directly involved in any sort of academic/scientific discourse (or publications in English) as defined by modern systems, their knowledge is nevertheless very effective, pragmatic and dissemination-worthy. Similar examples can be found in other areas: ethnomedicine, arts and crafts, and traditional engineering, to name a few. In particular, I remember one instance back in my final undergraduate days when my research involved testing high-altitude medicinal plants of Nepal against some clinically significant microorganisms. Since I had almost zero a priori knowledge about these plants and the ecosystems they thrive in, I had to seek help from a local guy who had all the information about the plants, where to find them, and how to collect them. There was a problem, though: I knew the plants only by their standard names, not the local ones. I might have done a lot of literature review and developed a perfect proposal for the research, but without traditional knowledge like that of our local guide, my research would not have even started. Hence, by dictating what gets out into the open and what does not, linguistic barriers sometimes undermine the capacity of traditional systems of knowledge propagation and generation. The next stage in scientific communication should therefore be towards incorporating a holistic method of producing and disseminating knowledge, taking into account all the diverse cultures and languages.


[1]van Weijen, D. (2012). The Language of (Future) Scientific Communication. Research Trends, 31

[2]Tardy, C. (2004). “The role of English in scientific communication: Lingua Franca or Tyrannosaurus rex?,” in J. English Acad. Purp. 3, 247–269. doi: 10.1016/j.jeap.2003.10.001

[3]Alves, M. A., and Pozzebon, M. (2013). How to resist linguistic domination and promote knowledge diversity? Rev. Adm. Empres. 53, 629–633. doi: 10.1590/S0034-759020130610

Ana Suarez a year ago
Additionally, the point of view that gets encouraged with the generalization of the English language also stimulates research that promotes a civilized-versus-barbaric duality between populations. Ethnographic works (I don't mean to generalize, because they seem to be decreasing in number) often show non-English speakers as brutes.

You don't have to go to small communities to see this at work. An example is that the people of the United States call themselves 'Americans,' excluding South and Central America from the range of 'civilization.'

This conception is pitifully mimicked by many people in Latin American and Caribbean countries, where their passive role in the civilization-vs.-barbarism opposition is reproduced and reinforced.
As crazy as it may seem, there is no word in English to refer to the United States population but "Americans."

Many schools in the Humanities, Philosophy, and Social Studies have an "American tradition" or are named "The American School of X."

Being bilingual in developing countries entails a very privileged education. Hence, access to publications and production and dissemination of knowledge ends up in a few hands.


“Funding”: A cry in unison

Shubhankar Kulkarni Sep 28, 2020
Most advanced research tools are highly expensive. Moreover, there is a high recurring (maintenance) cost to keep performing research. Therefore, a huge amount is spent to equip and maintain a research lab. Government funding programs receive a huge number of applications, and to be chosen, the number of publications matters. To run a lab, a professor has to have an active source of funding. Funds are usually allocated for an average of 5 years (starting from 2 years in some cases), depending on the extent of the work to be performed. The professor then needs to tap other funding sources that can be activated after the current funding expires. This leads to an investment of time and intellect on the part of the professor to obtain funding: time and intellect that could have been better spent acquiring knowledge. Israel and the Republic of Korea spend the most of all countries on research and development, between 4.5% and 5% of their GDP. Other countries spend far less. The outcome is fewer academic institutes in a country. However, every professor can mentor up to eight (numbers may differ across countries) graduate students at a time. The number of PhDs awarded per year in the US increased from about 31,000 in 1980 to about 54,500 in 2018, roughly a 75% increase. However, the number of academic institutes increased from 3,231 in 1980 to 4,313 in 2017, a mere 33% increase. Therefore, only about 20% of PhD graduates attain academic positions, and only 3% become professors. A large chunk of PhD graduates therefore turn to industry, which has funding as well as an increasing number of positions to fill, losing their dream of open research.
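The growth figures quoted above can be sanity-checked in a couple of lines (a sketch; the input numbers are exactly the ones in the text, and the exact PhD growth comes out closer to 76%, which the text rounds to 75%):

```python
# Quick check of the percentage increases quoted in the text.
phd_1980, phd_2018 = 31_000, 54_500    # PhDs awarded per year in the US
inst_1980, inst_2017 = 3_231, 4_313    # academic institutes in the US

phd_growth = (phd_2018 - phd_1980) / phd_1980 * 100
inst_growth = (inst_2017 - inst_1980) / inst_1980 * 100

print(f"PhDs awarded: +{phd_growth:.1f}%")         # ≈ +75.8%
print(f"Academic institutes: +{inst_growth:.1f}%")  # ≈ +33.5%
```

The gap between the two growth rates is the point being made: PhD output has grown more than twice as fast as the number of institutes that could employ those graduates.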

[1]https://www.investopedia.com/ask/answers/021715/what-country-spends-most-research-and-development.asp#:~:text=Israel%20and%20South%20Korea%20are,the%20Unesco%20Institute%20for%20Statistics.

[2]https://ncses.nsf.gov/pubs/nsf20301/data-tables/

[3]https://www.statista.com/statistics/240833/higher-education-institutions-in-the-us-by-type/

[4]https://www.theatlantic.com/business/archive/2013/02/how-many-phds-actually-get-to-become-college-professors/273434/

[5]https://smartsciencecareer.com/become-a-professor/


Corruption in hiring

Shubhankar Kulkarni Oct 30, 2020
The following things lead to the degradation of the quality of teachers/ professors:
  1. Nepotism
  2. Charging "fees" for the posts: those who can afford the fees get the job. Some also take cuts from the salary in perpetuity. This is another form of selling the posts; rather than accepting the fees beforehand, they are deducted from the employee's salary. They also stage the interviews, asking irrational questions of undesired candidates (who may well be suited for the post) and easy questions of the desired ones.
Are there any other vested interests anyone knows of or has experienced?

Intellectual Property: to share or not to share?

You_Know_Who Nov 29, 2020
Intellectual property (IP) refers to the intangible creations of the human intellect. There are different types of intellectual property, among the most well-known there are copyrights, patents, trademarks, and trade secrets.
The concept of IP and the laws related to it were established mainly to encourage the creation and dissemination of goods generated by a person's own intellect. To this end, the law gives people and businesses property rights to the information and intellectual goods they create. If people can economically benefit from their creations, thanks to the rights endorsed by IP laws, they will be more prone to share their ideas. Thus, the economic benefits underlying the development and sharing of an idea can stimulate and promote innovation, and strongly contribute to the technological development of entire countries.

In academia, patents are the most common type of IP. In fact, in labs working in applied science, a research project may lead to the development of a new tool, a new drug, or even a new pipeline, provided it fits all the requirements to be defined as "patentable":
  • A patent may be granted only for a tangible invention.
  • It must be new (the legal term is 'novel'): there must not have been anything like it already described anywhere, in papers, abstracts, videos, records, and so on.
  • It must involve an inventive step, for a standard patent. The invention must not be an obvious thing to do for someone with knowledge and experience in the technological field of the invention.
  • It must involve an innovative step. This is particularly important when another patent is already in place, from you or another party, and you want to patent a new one that brings improvements compared to the previous patent.
  • It must be useful.
  • It must not have been secretly used by you or with your consent.
When it comes to the economic aspect, the inventor or inventors each hold a share in the invention.
In an ideal world, the shares would be based on who did what, and whoever did the most would get the highest reward. Unfortunately, that is not always the case. Although the researcher (PhD student or post-doc) may have done most of the work, from experimental design to execution itself, the head of the lab may claim to be the major contributor, using "I suggested the idea" and "I found the money to pay for the experiments" as justifications. Moreover, sometimes more than one lab head or PI is involved, and they may also claim part of the share on the grounds that "we gave you suggestions". So the share of the main inventor can easily drop to almost that of someone who contributed absolutely nothing, with the majority going to the person who obtained the money to finance the research but did very little to develop the invention itself.
If on top of this you add that the host University can claim up to 70% of the share of an invention developed within its institutes or labs, it is quite understandable why a creative researcher may be put off from sharing their ideas at all. Some creators may prefer to keep their ideas to themselves, letting them rust rather than seeing the creations of their minds taken by somebody who has barely a clue about what it takes to invent them. Others prefer to patent the invention on their own and sell it to a company. This is probably the best-case scenario, because at least the idea is not lost. However, the University can still claim rights on the patent, to the point of suing the researcher, because the invention was created using University resources and so the University must have a share. These dynamics in Academia can be toxic and prevent not only the sharing of knowledge but also technological improvements with the potential to solve world-scale problems.
The hard question is: how can we solve this and ensure that a researcher's creativity and creations are fully acknowledged?

Standardize, standardize, standardize - the difference between positive and negative results

Brett M. Nov 29, 2020
This post falls in line with components of the "publish or perish" topic brought forth by Antonio Carusillo above, and it could not be more important right now, with a large portion of the population beginning to lose faith in science. A significant problem with scientific studies and academia is the lack of standardization from one laboratory to another. Even when studies seem to share all of their variables and test parameters, there is still a lack of standardization by journals that hampers subsequent labs when they attempt to replicate experiments. This is reflected by several articles seemingly "begging" for standardized methods in order to measure data accurately and establish robust results across a variety of fields [1,2,3,4].

In fact, there is evidence that seemingly similar studies with slight differences in study parameters, which introduce bias, have produced significantly different outcomes, generating different interpretations of similar problems. This raises great concern for those in academia as well as the general public.

For instance, say a doctoral student in the first years of their program is attempting to replicate a study in order to expand on it and discover new knowledge, yet the study (or studies) has been slightly modified without the details being provided in the article(s) to a standardized extent. The student is likely to spend months or years manipulating their study design to replicate the finding, and if unsuccessful, they have to circle back and find another problem or risk failing their program. The problem here is two-fold:

A) The Ph.D. student may now have to extend their program, which, depending on the program and location, can present significant financial hardship, since many programs have specific funding periods. This places emphasis on picking up student grants and scholarships, but with the heavy competition, this can be quite difficult.

B) The non-significant results, as mentioned by A. Carusillo in their post, are placed back in the "file drawer", presenting the file drawer problem. Briefly, this is the phenomenon whereby non-significant results are stored away for no one but the laboratory that produced them to see. Non-significant results are just as important as significant ones, because if we had access to the non-significant results found by someone else in academia, it would prevent another individual from wasting grant funding, resources, and their own precious time on a futile process.

In this context, if non-significant results were made acceptable to publish, imagine the exceptional progress science could make...

Thus, standardizing study designs, or at least standardizing the submission process to require each and every critical detail of the experiment, should be common practice in academia. Although many journals strive for transparency, there are still pitfalls in the process that can significantly affect subsequent studies trying to replicate an effect.

Non-significant results matter and should be considered just as publishable as significant results. Either that, or laboratories need to be much more explicit in describing their experimental parameters, for the benefit of progressing science rather than holding it back from its true potential.
Shubhankar Kulkarni, a year ago
Rightly said - non-significant results and the standardization process, if published, will help a lot of students. Here (https://brainstorming.com/ideas/order-lab-reagents-using-study-protocols/47) is an idea that addresses and also solves the problem. If you read the comments, it has been suggested that along with the reagents, the protocols should also be made public. This would tackle the problem of non-reproducible experiments. We can add another database that lists all the standardization processes along with the details of the reagents and cell lines used. If someone wants to reproduce an experiment that does not seem to work in their lab, they can ask the original lab for the reagents and other materials, or buy the same materials.
Brett M., a year ago
Shubhankar Kulkarni Thanks for directing me here. This is exactly what needs to be done. I'm wondering, however, how open journals and their editors would be to adding these criteria to their review process; after all, acquiring reviewers can be a difficult process (personal experience). At the same time, I would hope that all journals move in this direction, or else we face the possibility that some labs will submit only to the handful of journals that do not require these criteria, allowing the authors to bypass the process and continue the cyclic issue presented here.

However, I think it would be worth waiting out and navigating a longer review process if it enhances the integrity of the publication process and the progression of science.
Shubhankar Kulkarni, a year ago
Brett Melanson I agree that we need norms that most researchers will follow to make this a success. I was thinking more along the lines of creating a parallel database (not via the publication process), something more social that would attract users to post their research and gain popularity based on it. This motivation may suffice for more researchers to sign up.

However, what you suggest (appending the reporting process to the publication process) would ensure the reporting of standardization and negative results. The problem there is convincing journals to accommodate it in their review process.

Work^2 & Journal classification

Anja M Nov 30, 2020
  1. If you happen to be a full-time PhD student and work in parallel, you already know this is pretty close to, if not the same as, having two full-time jobs. The plot thickens if your work is itself in academia, in a research or teaching position. And I need to clarify this, given our differences in educational systems: by "work" I don't mean the occasional helping hand PhD students give to master's or bachelor's students, but real full-time curricula and class preparation that you carry for a whole semester or year, along with everything that comes with it: paper/presentation assessment, additional study help, literature choice, and the work you have besides the students, namely cooperation with your colleagues on various non-student matters to maintain the expected quality of the studies. Additionally, it occasionally happens that you have to cover a colleague's classes during their justified absence, which can be a real burden, since you are probably not specialized in that field. So if in the end you don't manage to complete your PhD in time, you are frustrated both by that sheer fact and because you will lose your job in academia, since you have limited time for it according to the laws of your country (as in my case). This is not necessarily a bad thing, since you can always, and still should, primarily focus on the quality of your thesis, as it is hopefully your entrance to more serious job opportunities. However, if you take on both of these at the same time in your life, be prepared for some serious balancing of responsibilities, appreciation of what each brings, but also the overwhelm that comes with it all.
And learn to snap out of feeling guilty and "underachieving" quickly, or you will pave your way to more depressing states of mind that ultimately lead to procrastination and near-zero productivity in both domains. More seriously: depression, or periods of depressive states of mind.
  2. I don't know whether this is the case in other countries, but where you publish your paper can often prove dissonant with your PhD and work requirements and expectations. Of course, the top journals are the top journals, no doubt about it, but... there are still lists upon lists of journal classifications that include some journals and not others, and it seems like a wheel of fortune how often the validity of each of these lists changes in your country. I won't even start on the reviewing process, which too frequently takes months, if not a whole year, to assess your paper; that is an additional drop in the anxiety pool when you consider the possibility that your paper may be rejected in the end. If not rejected, it depends on what changes you'll have to make and how much time you'll be given to do so. By the time your reviews arrive, you have probably forgotten all the subtleties of your paper because you have moved on to something else, and you have to get back in the game for it specifically. Not to mention that all serious research requires time, and that time is usually dissonant with what you are expected to produce on a yearly basis. Sometimes I cannot put my finger on whether this is harder for the natural/technical sciences or the humanities. But the rest of you wrote about this more extensively, so I won't go on about it. I will only stress, once again, the whole rat race of adjusting your choice of journals to what you are working on, in which what gets lost is your interest in the topic. Sometimes more "outsider" journals turn out to be of much better quality, with more fruitful reading, so you start to wonder what the criteria for the national journal rankings are actually based on.
There appear to be far less relevant criteria at play than most of us find acceptable, although I don't want to convey a picture of total arbitrariness. So... good luck to us all. :)

Going open source

Martina Pesce Nov 05, 2020
Considering the huge funding problem already mentioned, going open source would be very convenient for universities. Many support it, but there really is no united front on this. I believe that if all universities imposed their will in this direction, it would be impossible for the scientific journals to ignore.
Martina Pesce, a year ago
sci-hub <3
Shubhankar Kulkarni, a year ago
Agreed! There is a revolution on its way in the field of research publication, and open source is a major part of it. We know how https://sci-hub.do/ has been tackling this issue. Although it has faced wrath, more and more people secretly support it. It is only a matter of time before it is officially recognized.

"Sticking to the textbook/ curriculum" ideology

Shubhankar Kulkarni Nov 09, 2020
The problem: It may be laziness on the part of the knowledge provider, or their inherent lack of creativity, that makes them stick to the recommended textbook. This happens more in schools, and to some extent in institutes of higher education. Textbooks were originally guidelines for teachers, used to check that an entire topic was covered, and a help for students. They have more or less become holy books that land you in the next grade. Even the exams are based on the facts given in the textbook and do not encourage out-of-the-box thinking.

The cause: Other than laziness and lack of creativity, there may be a few other reasons. To provide equal education opportunities, national laws dictate that what is taught should be the same across the country. The intention of this law was pure when it started; however, it lost its vision over the years and there has been little change to it. Students can be difficult to teach, and creativity can be both good and bad: bad ideas cannot be appreciated, and on the teacher's part, distinguishing between good and bad ideas and explaining the difference to students can be difficult. Eliminating creativity altogether may then seem the easier solution for teachers. Another reason can be ego: students should not think beyond the teacher's imaginative capacity. Not all teachers appreciate it.

The effect: Creativity is curbed in early childhood, where it is most rampant. Students grow up sticking to the textbook and become too lazy to think outside the box. The institutes that have recognized this issue and want to overcome it either take extra effort to provide non-textbook education alongside textbook education, or they reject the system entirely and start a parallel, nationally unrecognized system of education. Both these alternatives only increase inequality in the kind of education provided and defeat the entire purpose of having a textbook.

*The percentage of teachers providing a high-quality education may vary across countries.

Teaching one-on-one or in small groups

Povilas S Mar 13, 2021
The difference in learning effectiveness between being taught in a group and receiving personal teaching is huge. In most cases, it makes all the difference. This is especially true if the student is eager to learn and truly interested in the subject matter. These are some of the main disadvantages of large audiences:
  • You may not be able to see or hear well enough if you are at the back or in a corner of the room.
  • There can be a lot of distraction from people who are not interested, and a temptation to chat with others even if you are.
  • There is rarely time for detailed questions and discussions, and even if there is, one may be too shy to ask in front of other people.
  • Different people have very different levels of understanding, learning skills, and interest, and it's impossible to address all of them at once.
All of the above problems are eliminated when one is taught individually. Even smaller groups are much better than large audiences. In my master's studies we were a group of only 5 people, compared to an audience of 20-30 in my bachelor's studies. The difference was huge: I could follow the lectures much better and keep consistent lecture notes, which I never did during my bachelor's. It was also much easier to interact with the lecturers and clarify details of the subject matter. More information would stick in my mind during the lectures, so there was less need to study at home.

It's certainly better to give shorter lectures to fewer people than long ones to many. People become less and less attentive to the same subject as time goes by. So dividing large groups into smaller ones and teaching the absolute essentials in shorter periods to those smaller groups would take the same total amount of time, and people would memorize things better. Additional information can be learned independently.

A blinded peer-review system can introduce bias that poses a threat to knowledge generation and dissemination

Shubhankar Kulkarni Nov 06, 2020
A single-blind peer-review system is one where the editors/reviewers know who the authors are; in a double-blind peer-review system, both the authors and the reviewers are blinded. The problems are:
  1. Reviewer bias: Revealing the identities of the authors contributes to decisions prejudiced against people from certain countries, newcomers, or certain groups (in a single-blind peer-review model). Reviewer bias also surfaces when reviewers have strong opinions about how a particular experiment or result should turn out, and they can reject papers showing results that contradict those opinions.
  2. Blinding leads to a lack of accountability for the comments of peer reviewers. Since the reviewers are blinded, their comments can never backfire, giving them a sense of immunity (corrupted power): "it (blinding) protects the vindictive, by concealing evidence of critical explanatory events and by hiding track records of bad behavior".
  3. Conflicts of interest remain undetected (since one or both parties are blinded).
  4. Does the blinding really work? "The rate of failure of blinding in the trials was high: average failure rates ranged from 46% to 73% (although in 1 journal within one of the trials it was only 10%)." The effort put in by journals to conceal the identities of the authors and/or the reviewers is highly variable. Moreover, successful blinding is tough in smaller scientific circles. When authors were asked to guess who their reviewers were, 11% guessed correctly according to one study, and 25% to 50% in another. Preprints make it easy to identify the authors of a particular paper.
  5. Even if blinding were foolproof, the experiments, the language of the manuscript, certain previously vocalized opinions, or suggestive phrases like "our previous work has shown" (which are necessary and not unusual) may give away who the authors are.
  6. Editor bias: "With a small fraction (10%) of biased editors, the quality of accepted papers declines 11%, which indicates that the effect of biased editorial behavior is worse than that of biased reviewers (7%)." A triple-blind review process (where the editors are blinded, too) is not feasible and highly uncommon.

[1] https://absolutelymaybe.plos.org/2017/10/31/the-fractured-logic-of-blinded-peer-review-in-journals/

[2] https://www.researchgate.net/publication/220416453_Single-_Versus_Double-Blind_Reviewing_An_Analysis_of_the_Literature

[3] https://www.atsjournals.org/doi/pdf/10.1164/rccm.201711-2257LE

[4] https://cacm.acm.org/magazines/2018/6/228027-effectiveness-of-anonymization-in-double-blind-review/fulltext

[5] https://springerplus.springeropen.com/articles/10.1186/s40064-016-2601-y


Fallacies/ biases in research

Shubhankar Kulkarni Dec 08, 2020
  1. Hypothesis myopia - Collecting/ citing only the evidence that supports the hypothesis and ignoring other reports/ explanations. A way to avoid hypothesis myopia is "devil's advocacy" - explicitly consider all alternative hypotheses and then test them head-to-head. Another method to avoid hypothesis myopia is pre-commitment. Display/ publish the data gathering and analysis techniques before starting the study.
  2. Texas sharpshooter - Collecting spurious patterns from the data and then mistaking them for an interesting finding of your research. Pre-commitment can be used to avoid this.
  3. Asymmetric attention - Giving more attention to unexpected outcomes than usual/ well-known/ expected outcomes.
  4. Just-so storytelling - Finding stories to rationalize whatever the results turn out to be.
Other ways to avoid incorporating biases in your research are collaborating with your rivals and blind data analysis.

[1] https://www.nature.com/news/how-scientists-fool-themselves-and-how-they-can-stop-1.18517

