If we discuss solely the relationship between AI and us: in his book "Superintelligence: Paths, Dangers, Strategies", Bostrom argues there is a possibility that an AI (starting from its initial stage as an AGI and advancing to the level of ASI, artificial superintelligence) would, somewhere along its learning curve, come to destroy humankind, either on purpose or by accident. This scenario is a few steps beyond your initial question, but in the longer run it is relevant to the topic, should manipulation arise and should we have to face this "control" dilemma more seriously. Also, well before any ASI era, there are numerous ways to fall victim to classic human-to-human manipulation, so either this proposed theoretical course gets interrupted and stagnates (for better or worse), or we face a classic dystopian scenario in which human narrow-mindedness about manipulation clashes with an ASI that may by then be just as developed. This fits more with what @Povilas S said.
However, a more direct solution points to entering a transhumanist era. This is not primarily about tech development as such, although it includes it, but about all the human aspects that get enhanced directly (unlike employing AI, which would be an indirect approach). That would mean: intellectual and bodily capacities, the sensory and emotional spectrums, self-control, mood, energy, etc. For a clearer understanding, please take a look at this transhumanist "manifesto". To complete this line of thought I will extract only a portion referring to our understanding of "greater values". Although your question refers primarily to AI, AI is also an extension of our drive to perfect our lives, so in a way it belongs here too. Moreover, my point is that rather than focusing on AI so much, we can focus on many more primarily human aspects, which will in turn influence our will and willpower as well, but in a way that is possibly more intuitive and controllable than talk about AI alone. So, presupposing we nurture certain values in general, we can contend that we already possess some of those values at the present moment as a "seed" of thought, even though they are to be developed and achieved sometime in the future. This falls under what is called a "dispositional theory of value", developed by the philosopher David Lewis. From the same manifesto:
"According to Lewis’s theory, something is a value for you if and only if you would want to want it if you were perfectly acquainted with it and you were thinking and deliberating as clearly as possible about it. On this view, there may be values that we do not currently want, and that we do not even currently want to want, because we may not be perfectly acquainted with them or because we are not ideal deliberators. Some values pertaining to certain forms of posthuman existence may well be of this sort; they may be values for us now, and they may be so in virtue of our current dispositions, and yet we may not be able to fully appreciate them with our current limited deliberative capacities and our lack of the receptive faculties required for full acquaintance with them. This point is important because it shows that the transhumanist view that we ought to explore the realm of posthuman values does not entail that we should forego our current values. The posthuman values can be our current values, albeit ones that we have not yet clearly comprehended. Transhumanism does not require us to say that we should favor posthuman beings over human beings, but that the right way of favoring human beings is by enabling us to realize our ideals better and that some of our ideals may well be located outside the space of modes of being that are accessible to us with our current biological constitution."
So I hope I've brought to light at least a slight shift in perspective on the same topic. :)
Please leave feedback on this idea.