A social media bot that reduces people's posts to principles and/or high-concept summaries. It considers both what people say and what they imply, then submits the summary as a comment under the original post.
Save readers' time by translating people's posts into high-signal, low-noise form.
The agenda behind people's words isn't always obvious. The bot brings it to light.
Help people put their thoughts into more efficient words.
Make people more honest on social media. You never know when the bot might show up and expose the agenda behind your words.
Make people more conscious that their inefficient writing might be wasting other people's time.
A tool for checking your own posts to see if you have expressed yourself clearly.
The bot takes the original poster's words and tries to reduce their meaning to the underlying principles or concepts. At times this could be useful; at other times, funny.
On platforms such as Reddit, the bot could simply go around and comment on people's posts without being asked to do so. On other platforms, such as Twitter, it would only participate when someone asks it to.
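As a rough illustration, here is a minimal sketch of the unprompted Reddit variant. It assumes the praw library for Reddit access; the bot account name, the subreddit, and summarize_to_principles are placeholders, with the latter standing in for whatever language model ends up doing the actual reduction.

```python
import praw


def summarize_to_principles(text: str) -> str:
    """Placeholder: in the real bot, this would call a language model
    (GPT-3 or a stronger successor) to reduce the post to its underlying
    principles and implied agenda."""
    raise NotImplementedError("plug in the summarization model here")


# Credentials are assumed to belong to a dedicated bot account.
reddit = praw.Reddit(
    client_id="...",
    client_secret="...",
    username="principle-bot",
    password="...",
    user_agent="principle-bot/0.1",
)

# On Reddit the bot comments unprompted: watch new posts as they appear.
for submission in reddit.subreddit("test").stream.submissions(skip_existing=True):
    post_text = f"{submission.title}\n\n{submission.selftext}"
    summary = summarize_to_principles(post_text)
    # Submit the reduction as a comment under the original post.
    submission.reply(f"In essence: {summary}")
```

On Twitter, the same summarization call would instead be triggered by a mention of the bot's handle rather than by a stream of all new posts.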
A successor of GPT-3 several generations down the line will likely be required for this software to work well. It would probably work with GPT-3 right now, but poorly.
In addition, a deep learning algorithm would comb through hundreds of thousands of examples prepared alongside a big team of psychologists and sociologists.
Experts would review endless streams of posts from Twitter, Facebook, and Reddit. A team member would pick a random post and reduce it to principles, concepts, or otherwise summarize what the original poster was trying to convey. Once the first reviewer was done, a second would look at the same post and write their own summary, then a third.
A fourth reviewer would compare all three reviews with the original post and decide whether they form a consensus. If they do, the deep learning algorithm is given the data to learn from. If there is no consensus, the data is still provided to the algorithm, but flagged as an example of what NOT to do.
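A small sketch of how the reviewed posts might be packaged for the learning algorithm. The LabeledExample structure, field names, and the positive/negative labels are assumptions for illustration, not a fixed schema.

```python
from dataclasses import dataclass


@dataclass
class LabeledExample:
    post_text: str        # the original post being reviewed
    summaries: list[str]  # the three independent reviewer summaries
    consensus: bool       # the fourth reviewer's verdict on the three


def to_training_record(example: LabeledExample) -> dict:
    """Turn a reviewed post into a training record: consensus examples show
    the model what a good reduction looks like, non-consensus examples are
    kept as samples of what NOT to do."""
    return {
        "input": example.post_text,
        "targets": example.summaries,
        "label": "positive" if example.consensus else "negative",
    }
```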
But, but... how does the bot know what the original poster really meant?
The bot knows what other readers know. In other words, if someone wasn't clear enough for the bot, they weren't clear enough for other readers either. If the algorithm is not (yet) good enough to understand what people mean, it should be upgraded. If the message was not clear enough, the original poster needs to set people straight. The bot's interpretation of the OP's words creates an opportunity for the OP to elaborate on what they really meant and how they meant it.
Please leave feedback on this idea.