
AI-generated articles comprised only of quotations from scientific papers and based on what you want to convey

Image credit: Flowwy

Darko Savic Sep 27, 2021
An AI-powered tool that takes a list of concepts you want to convey and generates an article solely composed of quotes from trustworthy sources. A reference to the original paper is linked for each excerpt.

An example

One would put together a bullet list of concepts they want to convey in an article:

  • A brief history of compulsory vaccination (200 words)
  • Pollen allergy prevalence in people who became adults before childhood vaccination became the norm
  • Pollen allergy prevalence in people who were born after it became the norm for children to receive 5 or more vaccines before the age of 10
  • A link between childhood vaccination and pollen allergy
With this information, the AI would search through databases of scientific papers and produce a concise article with relevant quotes.
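As a rough illustration of that retrieval step, the sketch below matches each concept in the user's list to the best-overlapping quote in a toy in-memory corpus. Everything here (the corpus, the word-overlap scoring, the reference strings) is an illustrative assumption; a real tool would run semantic search over full scientific paper databases.

```python
# Toy sketch of the quote-retrieval step: for each concept the user
# lists, score candidate quotes by word overlap and return the best
# match together with its source reference.

def tokenize(text):
    """Lowercased word set, with trailing punctuation stripped."""
    return {w.strip(".,").lower() for w in text.split()}

def best_quote(concept, corpus):
    """Return the corpus entry whose quote best overlaps the concept."""
    def score(entry):
        return len(tokenize(concept) & tokenize(entry["quote"]))
    return max(corpus, key=score)

# Hypothetical corpus entries; quotes and DOIs are made up for the demo.
corpus = [
    {"quote": "Compulsory vaccination was first introduced in the 19th century.",
     "ref": "doi:10.0000/example.1"},
    {"quote": "Pollen allergy prevalence has risen in recent decades.",
     "ref": "doi:10.0000/example.2"},
]

concepts = [
    "a brief history of compulsory vaccination",
    "pollen allergy prevalence over time",
]

# Assemble the "article": one best-matching quote per concept,
# each linked to its original paper.
article = [best_quote(c, corpus) for c in concepts]
for entry in article:
    print(f'"{entry["quote"]}" ({entry["ref"]})')
```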

Why?
  • Save a ton of work when conveying your ideas.
  • Have AI do most of the work while you write the abstract/conclusion.
  • Use this as a learning tool by specifying the concepts you wish to understand.

Some features
  • When the AI finds multiple suitable quotes, it offers them all and lets the person select which to incorporate.
  • An option to have GPT rewrite quotes more concisely.
  • An option to expand the article with brief explanations of concepts/terms that are difficult to understand. Select a word or a sentence that needs to be better explained.
  • An option to rewrite the article based on the level of the reader's expertise in the subject matter.
  • What else?
I understand that this tool has the potential to be abused by taking quotations out of context to legitimize wrong ideas. It could be a confirmation bias trap for the tool user. So let's figure out a way to guard against misuse.
Creative contributions

Additional features: year of publication and authenticity filters

Shubhankar Kulkarni Sep 28, 2021
I like the idea. More filters are necessary for high-quality, low-noise content. The user should be able to set a "year of publication" filter if they want, for example, data only from publications from the last 5 years, or only from between 1980 and 2000.

Authenticity filters for the papers could be another feature. For example, the user should be able to choose to include only data from papers published in journals with an impact factor greater than X, or by authors with an H-index greater than X.
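A minimal sketch of how such filters might look as predicates over paper metadata; the field names and thresholds below are illustrative assumptions, not a real database schema.

```python
# Hypothetical metadata filter covering the proposed criteria: a year
# range, a minimum journal impact factor, and a minimum author H-index.

def passes_filters(paper, year_range=None, min_impact_factor=None,
                   min_h_index=None):
    """Return True if the paper satisfies every filter that was set."""
    if year_range and not (year_range[0] <= paper["year"] <= year_range[1]):
        return False
    if min_impact_factor and paper["impact_factor"] < min_impact_factor:
        return False
    if min_h_index and paper["author_h_index"] < min_h_index:
        return False
    return True

# Made-up records standing in for a real paper database.
papers = [
    {"title": "A", "year": 1995, "impact_factor": 3.2, "author_h_index": 12},
    {"title": "B", "year": 2019, "impact_factor": 8.1, "author_h_index": 40},
]

recent_high_impact = [p for p in papers
                      if passes_filters(p, year_range=(2016, 2021),
                                        min_impact_factor=5)]
print([p["title"] for p in recent_high_impact])  # only "B" qualifies
```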

I agree that highly cited papers and high-H-index authors can also make mistakes. However, publications in top-ranking journals are scrutinized and judged far more, and faster, than others, which leads to the retraction of papers with fabricated (or non-reproducible) results and maintains the quality of the journals. Hence, filtering papers using such indices is not the worst idea. Also, the rate of retraction is lower for such journals, so most of the information contained in these papers is true at the time.
Manel Lladó Santaeularia 19 days ago
That's actually not true. A lot of times, journals with a higher Impact Factor tend to have more retractions, as shown here (1). That is because, in order to reach those journals, people will do anything, including fabricating data. While some journals deal with this surprisingly well, like the New England Journal of Medicine, other journals like Science or Nature have a tendency to give a lot of preference and less scrutiny to "potentially high-impact" manuscripts because they care more about the impact than the quality of science.
Similarly, the H-index of some scientists was very high until it was found out that everything they had published was bullshit. So it's definitely not an accurate indicator.

However, these are all problems with the academic publication world that are not completely relevant to the topic of this session. While I agree with the need for filters, I think that seeing only what high-impact journals publish on a certain topic can limit one's view to only the most "trendy" approaches or ideas, thus narrowing the potential for new interpretations of a problem. What I would certainly filter out are the predatory and low-reputation journals that severely damage the academic community.

References:
(1) https://www.researchgate.net/publication/231742591_Misconduct_accounts_for_the_majority_of_retracted_scientific_publications/figures
Shubhankar Kulkarni 18 days ago
Manel Lladó Santaeularia I think you mean "retractions" and not "rejections". Thank you for sharing the reference though, it is insightful. I get your point.

The point that I was trying to make was this: "publications in top-ranking journals are scrutinized and judged far more and faster than the others, leading to retraction of the papers with fabricated results (or non-reproducible results)"
Manel Lladó Santaeularia 18 days ago
Shubhankar Kulkarni I absolutely meant retractions; I've edited my comment. As for your point, I definitely agree with it.


General comments

Manel Lladó Santaeularia 20 days ago
Hi Darko Savic! I fail to see what exactly this would achieve. How would you produce a concise article by simply quoting other papers? I guess you are talking about a review article that focuses on a topic and goes into it in detail. In that case, there should be a flow and an order to the explanation, and a linkage between the different concepts and the main results of the papers that are cited. I don't think an AI can do that satisfactorily. Likewise, papers don't have only one "quote" or "main concept"; a lot of the time they have more than one important finding, method, or result that can be cited by other people.

Writing a review article is definitely an art, and from my experience I can say that the best way of doing it is reading and really understanding the literature, going in depth into it, and then finding patterns, similarities, differences, and interesting points. I find it difficult to imagine an AI having that kind of insight in the same way that an educated, experienced mind does. And I'm a big believer in AI, but I hope that, at least in this, our mind is better!
Darko Savic 20 days ago
Manel Lladó Santaeularia Pretty much everything I write is assisted by an AI algorithm that is less advanced than GPT-3. It offers to reword my sentences to make them more concise, and I often take it up on the offer. It conceptually "understands" what I'm trying to convey and often does a better job than I do.

If GPT-3 can do that, imagine what GPT-4, 5, and 6 will be able to do. It won't be long before we can fully outsource the literature search to AI. If this idea is on the edge of possibility at the moment, it will become realistic very soon. It basically takes the current tools up a notch: instead of offering better sentences, it would offer better paragraphs based on your outline of the concepts you need to convey or understand.

I understand that this has the potential to be abused by taking quotations out of context to legitimize wrong ideas. A confirmation bias trap for the tool user. So let's figure out a way to guard against misuse.
Manel Lladó Santaeularia 20 days ago
Darko Savic Interpreting the grammar of what you're writing and suggesting corrections is one thing; putting thoughts and complex ideas in order to form meaningful, insightful summaries is a very different one. If you look at most AIs that write on their own, like the ones that write books or movie plots, a lot of what they produce doesn't really make much sense. I imagine that in something as complex as the research/science field, where a lot of concepts are extremely deep and intricate, it would not be easy to create an AI that can extrapolate and present the insights of different papers in an organic, easy-to-read way.
Manel Lladó Santaeularia 20 days ago
Darko Savic In addition, not all research publications are worth the same. Having a critical eye when reading other people's work is really important, because not all journals, or even all peer reviewers, have the same standards. Additionally, data fabrication exists, and a lot of the time those fabrications or mistakes are not detectable by software but are detected by people with deep knowledge of those fields who realize that things don't add up. The "Obokata scandal" (1) comes to mind as an important example. In that case, scientists claimed to be able to reprogram human cells into pluripotent stem cells by only changing the pH of the medium they were being grown in. This was published in Nature, after peer review. Soon after the publication of the two papers, other scientists started questioning the validity of the claims, the data, and the writing. After an internal investigation, it was found that much of the data in those papers was fabricated. That could probably not have been caught by an AI.

The reason I point this out is that, regardless of journal, impact factor, and such, it is important to be critical of the research one is reading and reviewing, and I believe an AI could easily include information that is untrue or not well peer-reviewed.

References:
(1) https://en.wikipedia.org/wiki/Haruko_Obokata
Darko Savic 20 days ago
Manel Lladó Santaeularia If it were easy, the idea wouldn't qualify as novel. It would already have been done :)

A person would oversee what the AI puts together. As suggested in the "features" section, the person would have the ability to select options that make sense. The initial versions would be more of a human-directed tool than an auto-writer. In later versions, I can imagine the tool doing a better job than a person.