
The personal moral support bot

Image credit: Photo by Tofros.com from Pexels

Contrived _voice Jan 27, 2022
An AI that keeps track of your personal mental well-being via your online activities. It also keeps track of your personal accomplishments, and whenever you start on a downward spiral of self-destruction it offers a customized word of encouragement based on your personality. Not everyone responds to the direct "give it your best shot" approach. Think of it like a co-driver in the journey of life.
There's a line in "Vienna" by Billy Joel: "You know when you're wrong, but you don't always know when you're right." There is some scientific backing to that line: egocentric bias. Studies on the matter show that people with depressive tendencies also display higher rates of negative egocentric bias. The lowered sense of self causes them to perceive most of the blame as their own while devaluing their own contribution to a project. As a result they withdraw from the project, which further worsens its progress. This then reinforces the belief that they're not contributing, starting a downward spiral that can end in a panic attack or a mental breakdown.
It would be beneficial if they could be reminded that they're doing fine whenever they start doubting themselves. Cutting the problem off as it emerges would keep them on task and stop the self-destruction.
How it works
Humans are creatures of habit, so it is possible to create an AI that learns your behavioral patterns. The AI tracks your online presence both when you're happy and when you're at your lowest, then maps that data onto an emotional spectrum. It subtly tries to keep you from dropping below a point it deems healthy by recommending your own positive interests back to you.
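The tracking loop described here could be sketched roughly as follows. Everything in the sketch (the `MoodTracker` class, the keyword lists, the smoothing constant) is an illustrative assumption, not an existing implementation; a real system would use a trained sentiment model and far richer behavioral signals than word counts.

```python
# Illustrative sketch only: map activity to a sentiment score, smooth it
# into a "mood" estimate, and nudge the user when it dips below a threshold.

POSITIVE = {"great", "love", "excited", "proud", "fun"}
NEGATIVE = {"tired", "useless", "failed", "hopeless", "worthless"}

def score_activity(text):
    """Crude sentiment score in [-1, 1] from keyword counts."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

class MoodTracker:
    def __init__(self, threshold=-0.3, smoothing=0.2):
        self.mood = 0.0            # smoothed position on the emotional spectrum
        self.threshold = threshold # point below which a nudge is triggered
        self.smoothing = smoothing
        self.positive_interests = []  # content the user engaged with when happy

    def observe(self, text):
        """Fold one piece of activity into the mood estimate; maybe nudge."""
        s = score_activity(text)
        # exponential moving average keeps the estimate stable against noise
        self.mood = (1 - self.smoothing) * self.mood + self.smoothing * s
        if s > 0.5:
            self.positive_interests.append(text)
        if self.mood < self.threshold and self.positive_interests:
            return f"Remember how much you enjoyed: {self.positive_interests[-1]!r}"
        return None
```

The smoothing step matters to the "subtle" part of the idea: a single bad post doesn't trigger anything, but a sustained negative pattern does.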
The AI is also linked to your work schedule and keeps track of your accomplishments. Whenever you start to falter and question your contributions, it chooses a similar situation from the past and refers it back to you, turning you into your own exemplar.
This makes you think along the lines of "I've done this once, surely I can do it again." I feel this method could do the most good for high-functioning individuals who still struggle to find worth in their work and in themselves.
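The "your own exemplar" step amounts to a similarity lookup over a log of past accomplishments. A hypothetical sketch, using naive word overlap where a real system would use embeddings:

```python
# Hypothetical sketch: given a task the user is doubting, find the most
# similar logged accomplishment and echo it back as encouragement.

def similarity(a, b):
    """Jaccard word overlap between two task descriptions."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def encourage(current_task, accomplishments):
    """Pick the past accomplishment most like the current task."""
    if not accomplishments:
        return "You've got this."
    best = max(accomplishments, key=lambda past: similarity(current_task, past))
    return f"You've done this before: {best!r}. Surely you can do it again."
```

For example, a user stuck on "writing the quarterly report" would be shown their logged "shipped the quarterly report" rather than an unrelated win, which is what makes the encouragement feel personal instead of generic.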
Artificial dialogue isn't advanced enough to mirror human conversation exactly, but it could be possible for the entire system to share the same neural network, allowing it to learn from each of its individual users and to pair users with similar behavioral patterns together, in a form similar to group therapy. Having people to talk to with a shared connection is always good for mental well-being.
Creative contributions

General comments

Danny Weir 2 years ago
Great idea! This would absolutely work wonders for those in need of a little mental health boost. There are several apps that give a daily motivation/mantra, but the idea of a system that could monitor your mood and offer tailored encouragement sounds great. I would suggest that the phrase/word/piece of encouragement could come in several formats chosen by the user:
  • Memes
  • Written words and phrases
  • Quotes
  • Song lyrics (or even songs if connected to Spotify?)
  • Poems
  • Spoken encouragement
  • Pictures (maybe of loved ones or good experiences)
The user is then able to adjust what works best for them, or the AI could "learn it".
Contrived _voice 2 years ago
Danny Weir Yes, you get the idea. The AI could pick up on what works for you and then build on that. The applications could be endless. Think of people with post-traumatic stress disorder: the AI could identify their triggers and help them avoid those triggers. Similarly, it could ease them off the effects of the trauma by giving them subtle nudges to help them overcome it. Think of veterans, victims of abuse, and many other groups of people who are forced to live as prisoners of themselves.
What if it could be connected to your Fitbit or any other device that measures your heart rate? It could use past experiences to help identify causes of stress and, in the future, walk you through managing the problem before it becomes big enough to worry about. With enough users, you could essentially map out the entire emotional spectrum and transfer insights on personal development to whoever needs them most. Forget therapy; you could do it all on your own.
I do worry, however, that someone would find a way to take advantage of such a system for their own profit, say by advertising products that you are most likely to buy when you're at your most unguarded.
Spook Louw 2 years ago
Contrived _voice I was about to say the same. While I'm sure this is the future, there are still some major ethical hurdles to overcome. A simple search online for "the ethics of AI" will provide countless interesting discussions and lectures on the subject (with as many differences of opinion, of course). One I listened to recently does a nice job of explaining exactly how bias forms within AI and what that means for programs like the one you are suggesting.
Contrived _voice 2 years ago
Spook Louw Interesting podcast, great listen. Ethics are heavy. I liked the thought of viewing AIs as agents rather than entities; it's a great angle, and a reminder that although these networks are better at logical operations, they still have limits. With time I think we will find a way to work through the muddy question of what qualifies as a sentient being. That's just my hope, though; I think everything new has to go through this process.