Challenge

How rational is the concept of free will in this era of AI-powered data manipulation?

Image credit: https://unsplash.com/photos/HuE1cJo-x34

Nitish Oct 28, 2020
"Google is watching you" could be a proverb soon. For the last few years, I thought google is showing me the ads based on my past searches. But recently I found it scary and intriguing when I got offered the things I was only thinking about. I never searched or even talked about my thoughts with anyone. I was aware that IT companies are gathering our data, and they even know us better than we do ourselves.

Until now I hadn't put much thought into how this data could be used against me. For example, we have seen elections manipulated through social media platforms. Moreover, many people use social media to spread hate and mistrust among communities. What happens when people fall into such echo chambers of hate? Unfortunately, people are being fed fake news and believe it to be true.

What good is free will if it's so easily manipulated? Are we going to be governed by AI soon?

How do we get out of this?
4 Creative contributions

There are two sides to this

Povilas S Oct 28, 2020
I understand your concern, and it reminds me a lot of Yuval Noah Harari, who expressed very similar concerns: if AI perfects the art of pressing your biochemical buttons by giving you precisely what you want, it can also learn to manipulate your negative emotions just as well. Then you'd be just a puppet and could be manipulated in any way the power controlling that AI wants.

But there's another side to this which is, in my opinion, a very beautiful thing. The capability of AI to know you better than you know yourself might lead to something humanity has never experienced before but has always craved. If this power were used benevolently to help humans instead of manipulating them, you could have a more than perfect personal assistant. It would always know precisely what to say to make you feel good (and it doesn't have to be in a deceiving way, but rather a truthful one); it would know what song, movie, or anything else to recommend to you at any given moment. It could basically make decisions for you if you wished, and if not, you could make them yourself. That kind of relationship could benefit you in ways that are hard to imagine. That is (at least to some extent) what we always look for in a partner: someone who is as familiar with us as we are with ourselves, yet able to give us bits of advice for improvement that we are not yet able to come up with on our own.

So it's really all about the intention with which the technology is used. Technology is just a tool, and we can shape that tool any way we want according to our intentions. Therefore we should be more wary of people's intentions than of the technology itself.
Steven Agee 4 years ago
You are correct; AI is just a tool. The analytics, data modeling, and other machine learning and decision specifics are still controlled by people. As long as man is flawed, so will be ANY AI tool. It will be, at the end of the day, a reflection (in some ways) of those who create it.
jnikola 4 years ago
I love the "another side" you mentioned. I see AI as an extension of our mind, which opens a lot of new doors. If I wrote down my thoughts, they would sound exactly like yours. But as I read, some big questions emerged.

If you can't sue your mom, your friends, or the environment for raising and shaping you into a murderer, because in the end you killed by your own "free will", then you won't be able to sue the AI for the same thing either, right?

Parallel to that:

You willingly buy a smartphone and use it on a daily basis. You willingly open it and browse through oceans of data, allowing an AI to learn your habits and movements and to predict your thoughts.

Would that predicted thought be yours if there weren't an AI? Can we blame the algorithm here, or is it just a scary projection of ourselves that we fear? Is the AI dangerous because it will bring our thoughts to the surface before they emerge?

Just some late night brainstorming here... :)

Darko Savic 4 years ago
Our tools are designed to extend our will. We tend to fear exponential technology because we don't trust the tool operators. All our tools would be in harmony with humanity's interests if we weren't such a competitive, game-theoretic species.

Could we use AI to help us shift from being a game-theoretic, competitive species to a mostly collaborative species? https://brainstorming.com/r/s2

Transhumanism horizon

Anja M Oct 28, 2020
This is a fair question to ask; I think it has been periodically resurfacing, in an increasingly direct form, basically since Descartes' time. However, to skip the whole determinism vs. free will debate and focus on this particular instance, I will mention two not primarily connected points, reaching out to Nick Bostrom.

  1. If we discuss solely the relationship between AI and us, Bostrom states in his book "Superintelligence: Paths, Dangers, Strategies" that there is a possibility that AI (from its initial stage of being created as an AGI to achieving the level of ASI, artificial superintelligence) would, along its learning curve, reach a stage at which it would either on purpose or by accident come to destroy humankind. This scenario is a few steps beyond your initial question, but in the longer run it is relevant to the topic, should manipulation arise and should we have to face this "control" dilemma more seriously. Also, well before this ASI era, there are numerous ways to fall victim to classic human-to-human manipulation, so either this proposed theoretical course gets interrupted and stagnates (for better or worse), or we face some classic dystopian scenario in which human narrow-mindedness about the manipulation issue clashes with a potentially equally developed ASI. This fits more with what @Povilas S said.
  2. However, a more direct answer to this involves entering a transhumanist era. It is not primarily connected to tech development as such, although it includes it, but to all the human aspects in which we are enhanced directly (unlike the employment of AI, which would be an indirect approach). This would mean: intellectual and bodily capacities, the sensory and emotional spectrums, self-control, mood, energy, etc. For a clearer understanding, please take a look at this transhumanist "manifesto". To complete this line of thought, I will only extract a portion referring to our understanding of "greater values". Although your question refers primarily to AI, AI is also an extension of our drive to perfect our lives, so in a way it belongs here as well. Moreover, my point is that rather than focusing so much on AI, we can focus on many more primarily human aspects, which will consequently influence our will and willpower as well, but in a way that is possibly more intuitive and controllable than talk about AI alone. So, presupposing we nurture certain values in general, we contend that we can already possess some of these values at the present moment as a "seed" of thought, although they are to be developed and achieved sometime in the future. This falls under something called a "dispositional theory of values", developed by the philosopher David Lewis. From this manifesto as well: "According to Lewis’s theory, something is a value for you if and only if you would want to want it if you were perfectly acquainted with it and you were thinking and deliberating as clearly as possible about it. On this view, there may be values that we do not currently want, and that we do not even currently want to want, because we may not be perfectly acquainted with them or because we are not ideal deliberators. Some values pertaining to certain forms of posthuman existence may well be of this sort; they may be values for us now, and they may be so in virtue of our current dispositions, and yet we may not be able to fully appreciate them with our current limited deliberative capacities and our lack of the receptive faculties required for full acquaintance with them. This point is important because it shows that the transhumanist view that we ought to explore the realm of posthuman values does not entail that we should forego our current values. The posthuman values can be our current values, albeit ones that we have not yet clearly comprehended. Transhumanism does not require us to say that we should favor posthuman beings over human beings, but that the right way of favoring human beings is by enabling us to realize our ideals better and that some of our ideals may well be located outside the space of modes of being that are accessible to us with our current biological constitution." So I hope I brought to light just a bit of a shift on the same topic. :)

Deepfakes are coming

Darko Savic Oct 29, 2020
Soon enough deepfakes will be a matter of recording yourself and using an app on your phone to change the character to whomever you want to impersonate. What will that do to the overall credibility of the info we get online?
Martina Pesce 4 years ago
I believe this concept alone is very well worth a brainstorming session!


AI making you buy things - is it real?

jnikola Nov 09, 2020
Every time we go shopping and pay with a credit card or collect points with a loyalty card, our shopping habits become a piece of data somewhere. As you mentioned, sometimes you get offered something you only thought of. Well, this could be the reason - it's real.

Many retailers use advanced AI algorithms to track your shopping habits. One retailer recently stated that the holy grail of the retail business is "to build up a profile of customers and suggest a product before they realize it is what they wanted".

There is an application called Ubamarket that gives you the regular shopping functions - paying, making lists, scanning products for ingredients and allergens, etc. But what it also does, as the founder says, is "not only the obvious stuff, but it learns as it goes along and becomes anticipatory. It can start to build a picture of how likely you are to try a different brand, or to buy chocolate on a Saturday".

A Berlin start-up called SO1 offers AI-predicted product suggestions and claims "that nine times more people buy AI-suggested goods than those offered by traditional promotions, even when the discounts are 30% less".

The conclusion - it's real. Sometimes you buy things because an AI 1) predicts you would like to buy them, 2) suggests them, and then 3) you decide to buy them. What seems OK about it is the fact that you would, most probably, have come up with the idea of buying them yourself, too.


[1] https://www.bbc.com/news/technology-54522442
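
To make the "prediction" step above a bit more concrete, here is a minimal, purely illustrative Python sketch (it is not the actual system of any retailer or app mentioned above; the baskets and items are made up). It counts how often items were bought together in past loyalty-card baskets and then ranks what a shopper is likely to want next:

from collections import Counter
from itertools import combinations

# Hypothetical purchase histories collected through a loyalty card
histories = [
    {"bread", "butter", "chocolate"},
    {"bread", "butter"},
    {"bread", "chocolate"},
    {"beer", "chips"},
]

# Count how often each ordered pair of items shows up in the same basket
pair_counts = Counter()
for basket in histories:
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def suggest(basket, top_n=1):
    """Rank items not yet in the basket by how often they co-occur with it."""
    scores = Counter()
    for item in basket:
        for (a, b), count in pair_counts.items():
            if a == item and b not in basket:
                scores[b] += count
    return [item for item, _ in scores.most_common(top_n)]

print(suggest({"bread", "butter"}))  # -> ['chocolate']

Real systems use far richer signals (time of week, brand-switching likelihood, discounts), but the underlying step is the same: logged purchases are turned into a ranked guess about what you will want before you search for it.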



General comments

Darko Savic 4 years ago
Some of this is already being discussed here https://brainstorming.com/r/s136 and here https://brainstorming.com/r/s180

This topic seems to be on everyone's mind lately. :)
Povilas S 4 years ago
Darko Savic This session touches on a somewhat different and important aspect, though: actually manipulating people with the help of AI and collected personal data, whereas those two were more about personal data protection and a safe internet.