Have you ever considered settling the odds of a problem by placing a certain kind of bet?
Blaise Pascal made one such peculiar offer on an even more peculiar problem. He proposed a particular way of framing one’s stance towards the existence of God: in effect, humans bet their lives on the proposition that God either exists or does not. ‘With their lives’, because there is nothing else they can put on the table; no conclusive proof of the matter can be obtained in this life. And precisely because proof cannot be found here and now, it ultimately has to be a ‘bet’, since it starts from, and comes down to, the uncertainty of our knowledge.

Before I continue, I would like to stress that the point of this article is not merely to explain this famous hypothesis. It has been thoroughly developed, discussed, and disputed over the three and a half centuries of its existence, even weighed to determine whether or not it is a fallacy. For the sake of the argument, I will mention only the necessary counter-arguments. The point of introducing the argument at all is to see how far its scope of use extends beyond its original purpose of discussing the question of God’s existence and our (dis)belief in it. My goal is to see how it can be applied to assessing high-risk, pros-and-cons decisions such as: the creation of AI, the use of nuclear energy, and support for vaccination. So, in order to do this properly, bear with me through a slightly longer introduction.
Pascal claims that since we cannot really discover whether God exists or not, it is rational and pragmatic for us to believe that He does. The four possible outcomes are:
1) If God exists and we believe it – we get infinite reward (~epic win)
2) If God exists and we don’t believe it – infinite suffering (~epic fail)
3) If God doesn’t exist and we believe it – status quo (~fail, but no loss)
4) If God doesn’t exist and we don’t believe it – status quo (~win, but no meaning)
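The four outcomes above can be sketched as a small expected-value calculation. Everything in this snippet is an illustrative assumption on my part: the payoff numbers, the probabilities, and the use of `math.inf` as a stand-in for Pascal’s ‘infinite’ reward and loss.

```python
import math

INF = math.inf

# payoffs[stance][god_exists] -> utility, mirroring the four cases above
payoffs = {
    "believe":     {True: INF,  False: 0},   # 1) epic win    / 3) status quo
    "not_believe": {True: -INF, False: 0},   # 2) epic fail   / 4) status quo
}

def expected_value(stance: str, p_god: float) -> float:
    """Expected utility of a stance, given a (hypothetical) probability p_god."""
    return p_god * payoffs[stance][True] + (1 - p_god) * payoffs[stance][False]

# With infinite stakes, any nonzero probability makes belief dominate:
for p in (0.5, 0.01, 1e-9):
    print(p, expected_value("believe", p), expected_value("not_believe", p))
```

This is exactly the feature of the Wager that makes it provocative: once one payoff is infinite, the size of the probability stops mattering, as long as it is not zero.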
Developing this further on the original topic would require much more writing space and reading time, and it isn’t of primary relevance here. I will mention only two of the most notable objections, chosen for their relevance to the examples that follow.
Psychological impossibility: belief is not something we can simply will into existence; there is something odd about making oneself believe a premise just because doing so promises great consequences.
Moral impossibility: believing in God, or following any idea, merely because we expect a personal gain is itself morally questionable.
But someone could object to the last point and say: “Maybe that is wrong when it comes to belief in God, but why should I feel bad if I want to pursue AI development and production? Here the gain is not only personal; it benefits the whole of humankind.” This is partially true: the main difference lies in the empirically testable outcomes we deal with in the case of AI, and this is one of the practical settings where the Wager could be put to use. However, there are numerous scenarios in which AI development can tip the answer to either the ‘yes’ or the ‘no’ side. Professor Derek Leben examines this scenario in his article on the same topic. So, if we want to utilize this sort of reasoning, we must establish a threshold of minimum plausibility. That way we are not dealing with every scenario that crosses someone’s mind, but only with real potential risks and gains.
So, if we conclude that AI, once created, would have a high chance of entering into conflict with the human race and exterminating it, that would count as an infinite loss, and it would be reasonable to conclude against creating AI. In one of the opposite conceivable scenarios, humans might need to create AI precisely in order to save their own race. In that case our existence depends on AI, which can result either in an infinite gain, should we decide to create it, or in an infinite loss, should we decide in advance not to.
Still, another important factor to take into account in empirical situations is contingency. The examples above are just the ends of a spectrum; what about the many in-between cases? What if people decide to create AI and it simply ends up helping with many different tasks, making everyday life easier? That would count as a ‘finite gain’. If, on the other hand, we decide not to create it and can still carry on with our lives not depending on it, we remain at the ‘status quo’. A ‘finite gain’ is definitely worth more than the ‘status quo’, so, reasoning with the hypothetical data given, Pascal would say it is rational for us to vote for creating AI! In principle, this is how the Wager works.
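As a toy illustration of this in-between reasoning, here is a sketch that combines the minimum-plausibility threshold with a finite expected-value comparison. The scenario list, its probabilities, and its utility numbers are all invented for demonstration; nothing here is real risk data.

```python
MIN_PLAUSIBILITY = 0.01  # threshold: scenarios below this are ignored

# (description, probability, utility_if_we_create, utility_if_we_do_not)
# All numbers are hypothetical stand-ins.
scenarios = [
    ("AI helps with many everyday tasks", 0.60, 10.0, 0.0),  # finite gain vs status quo
    ("AI is built but changes little",    0.35,  1.0, 0.0),
    ("AI turns hostile",                  0.001, -1e6, 0.0),  # below threshold -> dropped
]

# Keep only scenarios that clear the plausibility threshold
plausible = [s for s in scenarios if s[1] >= MIN_PLAUSIBILITY]

ev_create  = sum(p * u_create for _, p, u_create, _ in plausible)
ev_abstain = sum(p * u_not for _, p, _, u_not in plausible)

print(ev_create, ev_abstain)  # under these assumptions, creating dominates
```

The design choice worth noting is that the threshold is applied before the expected values are summed; without it, a single wildly implausible doomsday scenario with a huge negative utility would swamp every finite gain, which is precisely the failure mode the minimum-plausibility condition is meant to prevent.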
I started brainstorming about this after noticing many issues in the COVID-19 crisis, about which I had already begun writing from another angle. Here I asked myself whether this kind of reasoning can help with the problem of vaccine acceptance or of wearing a face mask, for example; on that, see my contribution below. What I’d like to continue with you is brainstorming on other topics that tackle difficult but important social situations. I am listening. :)