
A Peer-reviewing system for Daily News

Antonio Carusillo Sep 29, 2020
Nowadays we can access daily news from many devices: TV, smartphone, computer, tablet. News no longer comes only in the form of a newspaper, but also as a podcast or even a Facebook post. It can range from the current political elections in a given country to the new iPhone model, but also covers more sensitive information, regarding health for example. During the 2020 pandemic we witnessed an astonishing increase in so-called fake news: the virus plot, the 5G antennas, and so forth. Not to mention the misinformation during Trump's press conference about intravenous injection of bleach to treat coronavirus infection! Some of this fake news escalated to the point that 5G antennas were destroyed because they were claimed to favour coronavirus infections, and NO-MASK protests were organised in the USA, UK, and Germany against the supposed futility of masks and of the lockdown in general, which was instead seen as a way to increase government control over people and thus a clear violation of freedom. If we also consider how the anti-vax wave has contributed to the rebound of viral infections like measles in the USA, we get a clear idea of the power, and the danger, of fake news.
Unfortunately, even though a great deal of effort has been made to fight off this fake news (even in a fun way, via satirical TV shows), it can still reach the general public and have detrimental effects.
One idea to ameliorate this would be to develop, as for scientific papers, a peer-review system through which a piece of news cannot be reported until it is approved for publication. This way, experts in the field could review the news and assess its genuineness. Of course, since there are so many sources and ways to spread news, this may be difficult to realise. An alternative would be, at least, to include for verified news a watermark stating that the news being read has been peer-reviewed and its source confirmed. This would make it easier for a person to decide whether the source of the news can be trusted, and hopefully, in the long term, reduce the impact of fake news on our daily lives.

How would you propose to tackle this challenge?
Creative contributions

AI Fact checking

Kenneth Zackerbjörg May 18, 2021
I don't know how. But AI should know. :)

Random "fact-checking" jury

jnikola Jan 21, 2022
The increasing use of social media is generating tons of information, which should somehow be checked. As stated in the paper, a professional fact-checking reviewer system is not scalable. It is also not reliable, since individuals can be easily manipulated. AI could handle the volume of work, but could also be biased or hacked. Therefore, we need a scalable tool to verify information.
The idea
Fact-checking jury.
How would it work
Users would register and verify that they are human on the site. The site would randomly select reviewers for every article or post. With an invitation, the "jurors" would also get access to a "background knowledge" database. The database would gather all the articles and information on the topic in a shortened format, to allow a "juror" to be as objective as possible. The "juror" would then decide whether the information is legitimate and rate it on a scale, along with additional comments (if any). The reviews from all the "jurors" would be combined into a validity score that would appear next to the article.
Additional information
Articles could be reviewed again if necessary. The number of articles reviewed per hour depends on the number of registered users, and the number of users would be higher if there were a reward system. The site could therefore offer benefits such as a per-review rate, or a gold membership that gives access only to curated, checked news.
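The jury mechanic above (random selection of verified users, then aggregation of their ratings into a validity score) can be sketched in a few lines. This is a minimal illustration, not a specification: the field names, the jury size, and the 1–5 rating scale are all assumptions for the example.

```python
import random
from statistics import mean

def select_jury(registered_users, k=5, seed=None):
    """Randomly pick k human-verified users to act as jurors for one article."""
    rng = random.Random(seed)  # seedable for reproducibility in tests
    verified = [u for u in registered_users if u["verified_human"]]
    return rng.sample(verified, k)

def validity_score(ratings):
    """Aggregate juror ratings (assumed 1-5 scale) into a 0-100 validity score."""
    return round(mean(ratings) / 5 * 100)

# Example: three jurors rate an article 4, 5 and 3 out of 5.
users = [{"name": f"user{i}", "verified_human": True} for i in range(10)]
jury = select_jury(users, k=3, seed=1)
score = validity_score([4, 5, 3])  # -> 80
```

A simple mean is used here for clarity; a production system would likely weight ratings by juror track record, as discussed in the other contributions.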

Peer-reviewing articles (and other reviewers) not to censor information but to assess its veracity and further critical thinking discussions on it

salemandreus Jul 21, 2021
"This post has been verified X times - see sources”.
This would be a far more advanced system than how Twitter currently shows "this post is disputed", using the power of peer reviewers.

It would show the number of verifications by peer reviewers and who they are, with their credentials, and crucially would force them to cite their sources.
For unverifiable but not provably false information, it could also show flags with the affected areas highlighted, like Wikipedia's "citation needed".
Thus it would also serve as a centralised repository of links where the relevant sources and contesting news would link back to that article.

This integration could be an external trusted service that sites can use, similarly to how having a “search with google” bar is becoming ubiquitous across many content sites.

At a high level this could simply be displayed as a bunch of “upvotes” or veracity points which you could expand to see who verified it and what sources they used (and similarly the "downvotes" or disputes, and “partially true- see note” or “could not find verification”).
Verifiers would include links and sources. The idea would be to take the scientific method to everything and challenge all media on the internet.

As it catches on, it would eventually become an expected norm for journalism sites, social media sites, etc. to have the accredited integration to externally verify the veracity of what they post; its absence could even trigger a notification, just as visiting a website without an up-to-date security certificate raises a red flag in browsers.

In this way, all the information is out there, but in context and subject to heavy global scrutiny.
Thus no singular authority is being given the power to "censor" someone else, but rather information is widely open to scrutiny, making it a social norm to fact-check writers rather than taking any information as true or likely true by default, in its entirety and without context.

Discussions of nuance (rare on social media) and examination of finer details would also be afforded by this nitpicky mechanic.

The power behind this would be community peer reviewing: with enough peer reviews from enough known external sources, it becomes evident that the information is not being verified only by a single organisation and affiliates with conflicting interests who could approve it.

People could also verify the verifiers themselves. Trust could be established through known, trusted entities verifying each other, similarly to how certificate authorities verify that websites are who they say they are via their security certificates, while themselves being rigorously verified to ensure they are legitimate and entitled to make these verifications. In the case of our verifiers, real identifying information would not be required; instead, their review track records and the information they share about themselves would be scrutinised.

Something that cannot be verified would not automatically be marked false, but it would also not be marked true (hence "citation needed"): the burden of proof for being taken seriously, either as a verifier or as an article writer, would rest on the one making the claims. This is the incentive to prove oneself in order to get verified.

In this way, we establish crowd-sourced information peer-reviewed by others. People can also volunteer credentials shown by social media accounts they link and verifications from other known individuals, who also require sources. Similarly to decentralised identity, this would be decentralised verification of who you are - basically peer-reviewing but even the peer-reviewing mechanisms and reviewers can be scrutinized.

People would be staking their credibility or business' credibility on verifying articles or parts of articles just as they do when making claims on social media. For example, software companies managing secure data could verify whether the explanations around types of encryption are technically accurate. A legal firm could verify that a journalist’s description of certain laws is accurate or provide footnotes on exceptions or other vital context.

A medical practitioner or governing medical body could confirm the veracity of medical information, a journalist could scrutinize other journalists’ research, interviewing practices and corroborative practices in the article, etc.
They would be on record via their official accounts attesting this or having personally verified certain information for their own usage similarly to news agencies. The sources they use would also be required to be stated along with their verification.

If people dispute your veracity as a less trustworthy entity, there would be consequences; this is easy to detect, since people checking the list of verifiers would see your name and could check out your verifying source links themselves. The same applies if you post outright lies and fake sources. Reviewers could also flag an outdated link, or raise a question mark if your sources or your identity could not be confirmed, in which case your veracity vote on those posts might not be counted.

Multiple disputes against you by verified peer reviewers and entities could put your vote on hold, reduce or entirely nullify the worth of your veracity votes on articles, or get you banned from the site, depending on the severity of the offence. For example, someone who repeatedly uses poor sources would end up with a vote that counts for less than the beginner weighting, whereas someone spreading deliberate misinformation would be banned and their existing votes marked as "disproven" or down-weighted as applicable, with the relevant peer reviewing against them also scrutinizable.
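The weighting and penalty scheme described here could be sketched roughly as follows. The numbers (weight cap, penalty sizes, the severity labels) are illustrative placeholders, not a proposed tuning.

```python
def vote_weight(reputation, base=1.0):
    """Scale a verifier's vote by reputation; a zeroed-out verifier counts for nothing."""
    if reputation <= 0:  # banned or fully discredited
        return 0.0
    return base * min(reputation, 2.0)  # cap so no single voice dominates

def apply_dispute(reputation, severity):
    """Reduce reputation when verified peers uphold a dispute.

    severity: 'minor' (e.g. repeated poor sourcing) or
              'major' (deliberate misinformation, wipes out reputation).
    """
    penalties = {"minor": 0.25, "major": reputation}
    return max(0.0, reputation - penalties[severity])

# A new verifier starts at 1.0; one upheld minor dispute drops them
# below the beginner weighting, a major one nullifies their votes.
rep = apply_dispute(1.0, "minor")   # -> 0.75
banned = apply_dispute(1.0, "major")  # -> 0.0
```

The key property matching the text is that penalties act on the multiplier applied to all of a verifier's existing and future votes, rather than on individual votes one by one.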

There could also be indicators of whether the linked post refers to an older version of the site that has since changed (similarly to how Google caches pages and sometimes has to be manually refreshed in search entries).

Any claim in an article or claim regarding the veracity of the article by a reviewer, or review of a reviewer can all be evaluated by peer reviewers. “Cite your sources” will be the basis for all verifications whether peer-reviewing of articles or peer-reviewing of verifiers.

Thus, there is full transparency to keep people informed through bringing visibility and much-needed context so people can make their own informed decisions and also build up further linking research. The goal is to give everyone easy access to the ability to investigate each claim. This also would put strong pressure on all journalistic and other entities to be doubly careful of what they claim to be definitively true and ensure they can back it all up with verified sources and hard evidence.

Hence we have the collaboration of journalists, independent companies, individuals, researchers, and accredited financial institutions, all of whom have different motivations and are not affiliated with each other, but whose shared motivation is finding truth and maintaining credibility, and who would therefore be incentivised to be honest and able to prove their claims.

We could also include, on people's or entities' profiles and in the peer-review form, a "disclosure" section where they can self-disclose any potential conflicts of interest. Those who do not disclose conflicts of interest would then be more heavily scrutinized, for instance when people assessing the credibility of reviewers discover subsidiary companies promoting each other without disclosing these partnerships or perks.

An AI could even assist by raising flags where citation or verification sources may be needed, similarly to Wikipedia, to further highlight unverified segments. Although Wikipedia does this manually through editors, the AI here would only be used to highlight possible concerns rather than attest to veracity (similarly to how Grammarly proposes suggestions but is not always correct). This is just for ease of detection, not for setting veracity ratings: human verifiers would be required to assess whether a flag is truly warranted.
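A first cut at such a flagger does not even need machine learning: a crude heuristic can surface sentences that look like factual claims but carry no citation, leaving the judgment to human verifiers exactly as described above. The claim-marker patterns below are illustrative, not an exhaustive detector.

```python
import re

# Phrases that often signal a factual claim (illustrative, not exhaustive).
CLAIM_MARKERS = re.compile(
    r"\b(studies show|research(ers)? (show|found)|according to|proven|"
    r"\d+(\.\d+)?\s*(%|percent))",
    re.IGNORECASE,
)
# Anything that looks like an inline citation or a source link.
CITATION = re.compile(r"\[\d+\]|https?://\S+")

def flag_possible_citations_needed(sentences):
    """Return sentences that look like factual claims but have no citation.

    These are only *candidate* flags; a human reviewer decides whether
    a 'citation needed' mark is actually warranted.
    """
    return [
        s for s in sentences
        if CLAIM_MARKERS.search(s) and not CITATION.search(s)
    ]
```

For example, "Studies show masks reduce transmission." would be flagged, while the same sentence followed by "[1]" or a source URL would not.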

For even more credibility, the entire platform could have its codebase open-sourced and subject to public scrutiny as its own form of peer review, similarly to how Signal is open-sourced on GitHub so its security practices can be scrutinized. This would ensure there is no bias or smoke and mirrors behind the scenes, and that other important mechanisms are in place, such as adequate protection against vulnerabilities or exploitation by unverified users.

The impact of the news on the society can be graded

Samuel Bello Sep 03, 2021
There is so much news out there, and sometimes the effect of misinformation can be overlooked. It is also important to note that one of the reasons some platforms are rated highly is that they are the first to break the news. For journalists, verifying news involves as much effort as learning it, and they hardly get any reward for the extra cost or effort.
If a system can be created to grade the impact of a piece of news on the community, policies can be made so that any news with a potentially large impact on society has to be verified thoroughly before publication. That way, only the most important news is verified.
As an extreme measure, people who spread high-impact news via any medium could be prosecuted for misinforming the public.
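The triage step this contribution describes (grade impact first, then decide how much verification is required) could look something like the sketch below. The impact formula, thresholds, and tier names are all made-up placeholders for illustration.

```python
def review_requirement(estimated_reach, topic_sensitivity):
    """Map estimated audience reach and topic sensitivity (0.0-1.0)
    to a verification tier. Thresholds are illustrative placeholders."""
    impact = estimated_reach * (1 + topic_sensitivity)
    if impact >= 1_000_000:
        return "thorough peer review before publication"
    if impact >= 10_000:
        return "standard fact-check"
    return "spot-check only"

# A viral health story reaches millions -> full review;
# a small local item only gets a spot-check.
review_requirement(2_000_000, 0.5)  # -> "thorough peer review before publication"
review_requirement(5_000, 0.0)      # -> "spot-check only"
```

The point of the sketch is the policy structure, not the numbers: reviewing effort is concentrated where the potential harm of misinformation is largest.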


General comments

Samuel Bello, 3 years ago
Daily news sources have very strict policies against broadcasting news that is not from reliable sources. These policies can be so strict that a news source can be sanctioned for sharing accurate but sensitive news without considering the effects on society. One could say that a review system is already in place, and that it is more practical than a peer-review system.

All of the examples cited are cases where the news does not seem to have originated from "daily news" sources. The misinformation usually comes from individuals or small groups. The reason such news spreads fast and can drastically affect public opinion is that it comes from sources that are not regulated. Some people trust these sources precisely because they are unregulated, so governments and large corporations cannot stop secrets from leaking through them. If the news were regulated, the public would not trust it as much.

In some cases, the news can be true but unverifiable. Since the sources are usually individuals who do not have to verify their news, it will be difficult to impose a review system without making people feel that their freedom of speech is being curtailed.

I believe that, instead of the scientific approach to verifying the news, this idea (https://brainstorming.com/short-video-explainers-to-bust-popular-conspiracy-theories/537) would be a more practical way to keep fake news in check.
Darko Savic, 3 years ago
Take a look at Daniel Schmachtenberger's The Consilience Project:
Google Doc description: https://docs.google.com/document/d/1gD30djiG8K5pi1lZF8-RfV9vaYtXz5232QDm9sdUKdU/edit
Video interview where he talks about it: https://youtu.be/Z_wPQCU5O6w

It's a similar idea to what you are proposing here.
Subash Chapagain, 4 years ago
Indeed, the spread of misinformation is the single most challenging problem society is facing right now. Every now and then, people succumb to fake news and conspiracy theories that distort their worldview and generate more chaos in an already divided world. Research has shown that, on average, fake news spreads six times faster than the truth, especially on platforms like Twitter and Facebook. Hence, the idea of peer reviewing would greatly improve the quality of the news that lands on our devices.

However, we have to be mindful of the challenges such an idea might face. Firstly, the sheer quantity of news worldwide is simply too overwhelming to be genuinely moderated and reviewed; unless the incentives are colossal and the investment proportionate, it is unlikely that such a solution will come into existence. Moreover, even if the likes of Bezos, Musk, and Ambani decided to invest in such a peer-reviewed news platform, the problem of inherent bias is always there (even the reviewers might have issues with each other, which might affect the quality of their approval or disapproval of any given news item). To tackle this, I would suggest using blockchain-like systems where the identity of the reviewers is not revealed and the merit of their review is scaled based on verity scores from their previously verified reviews. This would be knowledge-intensive to start with, but fruitful once the platform becomes large-scale.

We also have to be aware that nowadays news does not necessarily come from a proper news agency or a registered media house. A lot of people get news and new information from unsolicited Facebook posts and tweets of fellow individuals. Reviewing every person's social media would in itself be a daunting and resource-exhaustive task (no wonder Facebook and Twitter leave it up to their AI bots).
The only solution could be to regulate giant companies like Facebook and Google into recruiting dedicated human-only review teams to go through as many news-like posts as possible (they can obviously use algorithms to separate news-like from non-news-like posts) and fact-check them manually before permitting publication.
Subash Chapagain, 4 years ago
Another idea along these lines: rather than establishing one single large-scale platform for news reviewing, establish localized systems for the same purpose. For example, set up local fact-checking ecosystems involving locally trained journalists and media experts who know the culture and history of their assigned locations. Such an arrangement would help maintain the objectivity of the review process and make the whole system more reliable and accurate. Though such fact-checking websites do exist at the national level (for example, Alt News in India), making them more localized would help the facts penetrate deeper into the community and counter the negative impacts of misinformation in a more organic manner.
jnikola, 4 years ago
I am glad you started this topic and I think the idea is genius! As you already mentioned, peer-reviewing the news before publication is almost impossible: not only because of the endless list of news portals, blogs, videos, and posts that just cannot be controlled, but also because it would allow even stronger control of the information flow and limit free speech. On the other hand, the alternative you offered seems like an interesting and viable solution! Nowadays it is hard to know which news is verified, but I found some interesting pages that could help you develop this idea. Wikipedia has a dynamic list of fake news websites that "intentionally, but not necessarily solely, publish hoaxes and disinformation for purposes other than news satire" (https://en.wikipedia.org/wiki/List_of_fake_news_websites).