"This post has been verified X times - see sources”.
This would be a far more advanced system than Twitter's current "this post is disputed" label, harnessing the power of peer reviewers.
It would show the number of verifications by peer reviewers, who those reviewers are along with their credentials, and, crucially, would force them to cite their sources.
For information that is unverifiable but not provably false, it could also show further flags with the relevant passages highlighted, like Wikipedia's "citation needed".
It would thus also serve as a centralised repository of links, with the relevant sources and contesting coverage linking back to the article.
This integration could be offered as an external trusted service that sites plug in, much as a "Search with Google" bar has become ubiquitous across content sites.
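As a rough sketch of what such an integration might look like, a site could fetch a verification summary for a given URL and render it as the badge above. The endpoint URL, response fields and badge wording below are assumptions for illustration, not a defined API:

```typescript
// Hypothetical client for the external verification service described above.
// The endpoint URL, response fields and badge wording are assumptions.
interface VerificationSummary {
  url: string;
  verifiedCount: number;        // "verified" votes backed by sources
  disputedCount: number;        // "disputed" votes backed by sources
  citationNeededCount: number;  // passages flagged as unverifiable
}

async function renderVerificationBadge(articleUrl: string, container: HTMLElement): Promise<void> {
  const res = await fetch(
    `https://verify.example.org/api/summary?url=${encodeURIComponent(articleUrl)}`
  );
  const summary: VerificationSummary = await res.json();

  container.textContent =
    `This post has been verified ${summary.verifiedCount} times ` +
    `(${summary.disputedCount} disputes, ${summary.citationNeededCount} citation-needed flags) - see sources`;
}
```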
At a high level this could simply be displayed as a set of "upvotes" or veracity points, which you could expand to see who verified the post and which sources they used, alongside the "downvotes" or disputes and intermediate verdicts such as "partially true - see note" or "could not find verification".
Verifiers would be required to include links to their sources. The idea is to apply the scientific method to everything and open all media on the internet to challenge.
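To illustrate what each expandable entry might carry, a single peer review could be a record along these lines; the verdict labels and field names are assumptions for the sketch rather than a fixed schema:

```typescript
// Illustrative shape of one peer review behind the expandable badge.
type Verdict =
  | "verified"
  | "disputed"
  | "partially_true"        // "partially true - see note"
  | "could_not_verify";     // "could not find verification"

interface PeerVerification {
  reviewerId: string;       // links back to the reviewer's profile and credentials
  verdict: Verdict;
  sources: string[];        // cited links: required for every verdict
  note?: string;            // e.g. the context behind "partially true"
}
```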
As it caught on, it would eventually become an expected norm for journalism sites, social media platforms and the like to carry the accredited integration so that what they post can be verified externally. Its absence could even trigger a notification, just as browsers raise a red flag when you visit a website without an up-to-date security certificate.
In this way, all the information is out there, but in context and subject to heavy global scrutiny.
Thus no single authority is given the power to "censor" anyone else; rather, information is widely open to scrutiny, making it a social norm to fact-check writers instead of taking any information as true, or likely true, by default, in its entirety and without context.
Discussions of nuance (rare on social media) and examination of finer details also become more natural through this deliberately nitpicky mechanic.
The power behind this would be community peer review: with enough reviews from enough known external sources, it becomes evident that the information is not being approved solely by one organisation and its affiliates with conflicting interests.
People could also verify the verifiers themselves. Trust could be established through established verifiers and known trusted entities verifying each other, much as certificate authorities confirm that websites are who they claim to be by signing their security certificates, and are themselves rigorously audited to ensure they are legitimate and entitled to make those attestations. In the case of our verifiers, real identifying information would not be required; instead, their track record of reviews and the information they share about themselves would be scrutinised.
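As a minimal sketch of that chain of trust, assuming a simple endorsement graph where known roots (for instance accredited institutions) vouch for other verifiers, a conservative check might look like the following; the quorum rule and field names are illustrative, not a specification:

```typescript
// Minimal sketch of a decentralised chain of trust between verifiers,
// assuming a simple endorsement graph. The quorum rule is a placeholder.
interface Verifier {
  id: string;
  trustedRoot: boolean;     // e.g. a well-known, rigorously vetted institution
  endorsedBy: string[];     // ids of verifiers vouching for this one
}

// A verifier counts as trusted if it is a root, or if at least `quorum`
// verifiers that are themselves trusted have endorsed it. The `seen` set
// stops circular endorsements from vouching for themselves.
function isTrusted(
  id: string,
  all: Map<string, Verifier>,
  quorum = 2,
  seen: Set<string> = new Set()
): boolean {
  if (seen.has(id)) return false;
  const v = all.get(id);
  if (!v) return false;
  if (v.trustedRoot) return true;
  const next = new Set(seen).add(id);
  const trustedEndorsers = v.endorsedBy.filter(e => isTrusted(e, all, quorum, next));
  return trustedEndorsers.length >= quorum;
}
```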
Something that cannot be verified would not automatically be marked false, but neither would it be marked true (hence "citation needed"): the burden of proof, whether as a verifier or as an article writer, rests on the one making the claims. That is the incentive to prove oneself in order to get verified.
In this way we establish crowd-sourced information that is peer-reviewed by others. People can also volunteer credentials by linking social media accounts and obtaining verifications from other known individuals, who likewise require sources. Much like decentralised identity, this would be decentralised verification of who you are: peer review all the way down, where even the reviewing mechanisms and the reviewers themselves can be scrutinised.
People would be staking their own or their business's credibility on verifying articles or parts of articles, just as they do when making claims on social media. For example, software companies managing secure data could verify whether explanations of types of encryption are technically accurate. A legal firm could verify that a journalist's description of certain laws is accurate, or provide footnotes on exceptions and other vital context.
A medical practitioner or governing medical body could confirm the veracity of medical information, a journalist could scrutinize other journalists’ research, interviewing practices and corroborative practices in the article, etc.
They would be on record via their official accounts as attesting to this, or as having personally verified certain information for their own use, much as news agencies do. The sources they relied on would have to be stated alongside their verification.
If people dispute your veracity as a less trustworthy entity, this is easy to spot: anyone checking the list of verifiers would see your name and could follow your cited source links themselves. There would be consequences for posting outright lies or fake sources; reviewers could also flag an outdated link, or raise a question mark if your sources or your identity could not be confirmed, in which case your veracity vote on those posts might not be counted.
Multiple disputes against you by verified peer reviewers and entities could put your vote on hold, reduce the weight of your veracity votes or nullify them entirely, or get you banned from the site, depending on the severity of the offence. For example, someone who repeatedly uses poor sources would end up with a vote that counts for less than the beginner weighting, whereas someone spreading deliberate misinformation would be banned and their existing votes marked as "disproven" or down-weighted as applicable, with the peer reviews against them also open to scrutiny.
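One way this weighting could behave is sketched below; the thresholds, weights and status names are placeholders to show the shape of the rule, not a proposed spec:

```typescript
// Rough sketch of how upheld disputes could affect a reviewer's vote weight.
const BEGINNER_WEIGHT = 1.0;

type ReviewerStatus = "active" | "on_hold" | "banned";

interface ReviewerStanding {
  weight: number;           // multiplier applied to this reviewer's veracity votes
  status: ReviewerStatus;
}

function standingAfterDisputes(upheldDisputes: number, deliberateMisinformation: boolean): ReviewerStanding {
  if (deliberateMisinformation) {
    // Deliberate misinformation: banned, existing votes marked down or nullified.
    return { weight: 0, status: "banned" };
  }
  if (upheldDisputes >= 5) {
    // Too many upheld disputes: votes put on hold pending further review.
    return { weight: 0, status: "on_hold" };
  }
  // Repeated poor sourcing drags the weight below the beginner baseline.
  return { weight: Math.max(0, BEGINNER_WEIGHT - 0.2 * upheldDisputes), status: "active" };
}
```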
There could also be indicators showing that a linked source points to an older version of a page that has since changed (similar to how Google's cached copies lag behind and sometimes need to be manually refreshed in search results).
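A simple way to support such an indicator, assuming the service keeps a snapshot of the reviewed text, is to store a content hash at verification time and compare it later; this is a sketch using Node.js's built-in crypto module, not a claim about how such a service actually works:

```typescript
// Detect that a page has changed since it was verified, by hashing a
// snapshot of its text at review time.
import { createHash } from "crypto";

interface VerifiedSnapshot {
  url: string;
  verifiedAt: string;       // ISO timestamp of the review
  fingerprint: string;      // hash of the page text as it was reviewed
}

function contentFingerprint(text: string): string {
  return createHash("sha256").update(text, "utf8").digest("hex");
}

// If the fingerprints differ, the UI could label the verification as
// applying to an older version of the page.
function changedSinceVerification(snapshot: VerifiedSnapshot, currentText: string): boolean {
  return contentFingerprint(currentText) !== snapshot.fingerprint;
}
```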
Any claim in an article, any claim a reviewer makes about the article's veracity, and any review of a reviewer can all be evaluated by peer reviewers. "Cite your sources" is the basis for every verification, whether peer-reviewing articles or peer-reviewing verifiers.
Thus there is full transparency: visibility and much-needed context keep people informed, so they can make their own decisions and build further linked research on top. The goal is to give everyone easy access to the ability to investigate each claim. This would also put strong pressure on journalistic and other entities to be doubly careful about what they claim to be definitively true, and to ensure they can back it all up with verified sources and hard evidence.
Hence we have the collaboration of journalists, independent companies, individuals, researchers and accredited financial institutions, all of whom have different motivations and are not affiliated with each other, but whose shared motivation is finding truth and maintaining credibility, and who are therefore incentivised to be honest and able to prove their claims.
We could also include, on people's or entities' profiles and in the peer-review form itself, a "disclosure" section where they can self-disclose any potential conflicts of interest. Those who do not disclose would face heavier scrutiny if people assessing reviewer credibility later discover, say, subsidiary companies promoting each other without declaring these partnerships or perks.
An AI could even assist by raising flags where a "citation/verification source may be needed", similar to Wikipedia's tags, to further highlight unverified segments. Whereas Wikipedia does this manually through editors, the AI here would only highlight POSSIBLE concerns rather than attest to veracity (much as Grammarly proposes suggestions that are not always correct). It is purely for ease of detection, not for assigning veracity ratings; human verifiers would still be required to assess whether the flag is warranted.
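A real system would likely use a trained model for this, but even a toy heuristic conveys the idea of surfacing candidates without judging them; the patterns below are purely illustrative:

```typescript
// Toy heuristic standing in for the AI-assisted "citation may be needed"
// flagging: it only surfaces candidate sentences; humans make the call.
interface CitationFlag {
  sentence: string;
  reason: string;
}

function flagPossibleCitationsNeeded(text: string): CitationFlag[] {
  const sentences = text.split(/(?<=[.!?])\s+/);
  // Assertive-sounding phrases or bare statistics with no visible source.
  const assertive = /\b(proves?|causes?|always|never|according to|studies show)\b|\d+\s?%/i;
  const looksCited = /(https?:\/\/|\[\d+\])/;   // a link or a [n]-style reference

  return sentences
    .filter(s => assertive.test(s) && !looksCited.test(s))
    .map(s => ({ sentence: s, reason: "assertive claim with no visible source" }));
}
```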
For even more credibility, the entire platform's codebase could be open-sourced and subject to public scrutiny as its own form of peer review, much as Signal's code is open on GitHub so its security practices can be examined. This would help ensure that no bias or smoke and mirrors is happening behind the scenes, and that other important safeguards are in place, such as adequate protection against vulnerabilities or exploitation by unverified users.