Brainstorming session

How can we make AI comprehend Human Ethics ?

Image credit: illustration by erhui1979/iStock

Mohammad Shazaib, Aug 25, 2020

Who gets to control new technology is the defining question of our time. Should civilization permit the technological superiority or scientific exuberance of a few to determine who makes choices that dramatically affect all of humanity?

Society may leap into morality-defining innovation without the analog of a constitutional convention to decide who should be authorized to determine whether, and how, these innovations are made available to society. What are the moral issues? What kind of accountability would be important?

Researchers argue for building ethical standards into robotics to safeguard social norms and human safety [1].

Are we ready to pose a solution to this coming challenge?

[1] Winfield, A. Ethical standards in robotics and AI. Nat. Electron. 2, 46–48 (2019). https://doi.org/10.1038/s41928-019-0213-6

3 Creative contributions

Deep learning

Udruga Mladih UMNO, Oct 12, 2020
As in chess and AlphaGo, the AI has only one goal: to win. It plays millions of games until it is a self-taught winning machine. It's the same with human ethics: we need to make a win condition the goal for the AI. Then it runs a virtual simulation in which it tries to win. All it takes is a "chess board", winning conditions, and a strong "chess" AI.
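The "win condition" idea above can be sketched as a tiny tabular Q-learning loop. Everything here is an illustrative assumption rather than a real ethics model: the "chess board" is a hypothetical 5-cell line world, and "winning" simply means reaching the last cell.

```python
import random

random.seed(0)  # deterministic for illustration

# Hypothetical "board": a 5-cell line; the "win" is reaching the last cell.
N_CELLS = 5
ACTIONS = [-1, +1]  # step left or right
q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

def play_episode(epsilon=0.1, alpha=0.5, gamma=0.9):
    s = 0
    while s != N_CELLS - 1:
        # epsilon-greedy: mostly exploit the current estimates, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_CELLS - 1)
        reward = 1.0 if s_next == N_CELLS - 1 else 0.0  # the "win" condition
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

for _ in range(500):  # "millions of games", scaled way down
    play_episode()

# The learned greedy policy: from every non-terminal cell, step toward the win.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_CELLS - 1)}
print(policy)
```

The hard part the comment glosses over is, of course, writing down the reward: for chess it is one line, for ethics it is the whole problem.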

It's just a set of rules, right?

Juranium, Oct 20, 2020
Human ethics is all about right and wrong. Whether something is right or wrong, we can usually tell by the feedback we get: if most people respond positively, it is probably right.

Therefore, it seems like an easy AI task to evaluate the outcome of every action and label decisions as "right" or "wrong". If we enrich the interpretation of the outcome with emotions extracted from human feedback (writing style, emoticons, likes/dislikes), we could reach an even deeper understanding of morality.
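The feedback-aggregation idea above can be sketched in a few lines. The signal weights and the tiny emoticon lexicon are assumptions for illustration, not a real sentiment model:

```python
# Hypothetical emoticon scores; a real system would use a learned sentiment model.
EMOTICON_SCORES = {":)": 1.0, ":D": 1.0, ":(": -1.0, ">:(": -1.5}

def judge_action(likes, dislikes, emoticons):
    """Label an action 'right' or 'wrong' from aggregated crowd feedback."""
    vote_score = likes - dislikes
    emotion_score = sum(EMOTICON_SCORES.get(e, 0.0) for e in emoticons)
    total = vote_score + emotion_score
    return "right" if total > 0 else "wrong"

print(judge_action(likes=120, dislikes=30, emoticons=[":)", ":D", ":("]))  # right
print(judge_action(likes=5, dislikes=40, emoticons=[">:(", ":("]))         # wrong
```

As the reply below points out, the weakness of this approach is that majority approval and moral rightness are not the same thing.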
Spook Louw, 8 months ago
I agree that if you look at ethics as a list of rules, it should be simple to program an AI to know all those rules.

The problem, I think, is that that isn't exactly what ethics is. And I believe we won't be able to teach AI to comprehend ethics because, in truth, we can't fully understand ethics ourselves. Ethics is defined as "the philosophical study of the concepts of moral right and wrong and moral good and bad, to any philosophical theory of what is morally right and wrong or morally good and bad, and to any system or code of moral rules, principles, or values." - https://www.britannica.com/topic/ethics-philosophy

While it is essential for lawmaking and keeping society civil, and could even be argued to be what separates us from animals (which act purely on instinct), there is no definitive ethical code.

Even if we agreed (which we don't) that ethics should be approached in a utilitarian fashion, in which actions are judged good or bad depending on whether the greatest number of people benefit from them, we would still be stuck on what constitutes "benefit".

Let's take an extreme example to illustrate the point. If we cut down all the trees in the world, more people would benefit financially: jobs would be created to fell the trees and process the material, there would be abundant firewood (for a while) for heating and cooking, and there would be more space for farms and settlements.

Obviously, we know cutting down all the trees would mean the end of civilization; we definitely shouldn't do that.
But that is only clear because the example is extreme. How do we decide how many trees we can afford to cut down for the benefit of people?

That is an ethical problem, one where simply looking at the greater good does not give us a clear answer.

So ethics is always changing to fit the current situation; it needs to be fluid. Simply installing a rigid code of ethics would not make AI understand ethics, it would merely determine how it acts. That could have serious negative outcomes as well.

Imagine programming a robot to always ensure the greater good of humanity. At this moment, with the problems of overpopulation and depleting resources, the "greatest good for humanity" might be to kill 49.99999...% of humanity. The remaining 50.00000...% would undoubtedly be better off, yet it would be hard to argue that killing 3.8 billion people is the ethical thing to do.
Juranium, 8 months ago
Spook Louw, I am glad I provoked this discussion with my simple answer.
I don't think ethics are a simple set of rules. It's a highly dynamic, ever-changing, and flexible system.
But so are, for example, psychotherapy and education. There are some basic rules that need to be followed, but in general it's guided improvisation: not all patients require the same words of consolation or motivation, nor can all kids be taught maths the same way. We make mistakes, learn, and apply.

If I were to build an AI that understands ethics, I would apply the same principle using machine-learning concepts. I even think robots could come to comprehend human ethics better than humans do.

Teach the robots by allowing them to be "on board"

Juranium, Apr 02, 2021
One way to teach an AI to comprehend human ethics could be to use machine learning on chats, forums, and other content-based sources to extract questions, answers, reactions, "environment", and other information necessary to understand the context of a topic. Success would depend on the quality of the code, and the programmers developing it would have a hard task.

The other idea, additional or separate, could be to program a robot to "listen and write down" everything a person does. Just as a Tesla car collects information about driving conditions, traffic, and the driver, the robot could go through a "just watching and learning" phase.

A company developing an AI robot could have people regularly wear small custom-made tracking devices that record a person's reactions, conversations, gestures, etc., and use the recordings to "feed" the robot. That way, a robot could be "on board" with a person and extract all the information needed to develop a dynamically changing ethical system. Slowly, the robot could be allowed to suggest solutions, and each suggestion would be tested by the person.

With millions of devices already doing something similar (listening to us, mostly to serve personalized advertisements), location-, time-, and situation-specific reactions from an AI that follows the same ethical principles as humans could be achievable.
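The "watching and learning" phase above can be sketched as a simple observation log: the robot records (situation, action, human reaction) events, then suggests whichever action drew the most positive reactions in that situation. The situations, actions, and reaction scores below are hypothetical examples, not data from any real system.

```python
from collections import defaultdict

# situation -> action -> list of observed human reaction scores
observations = defaultdict(lambda: defaultdict(list))

def record(situation, action, reaction_score):
    """Log one observed event during the 'just watching' phase."""
    observations[situation][action].append(reaction_score)

def suggest(situation):
    """Suggest the action with the highest average observed reaction."""
    actions = observations.get(situation)
    if not actions:
        return None  # never seen this situation: stay in watch-only mode
    return max(actions, key=lambda a: sum(actions[a]) / len(actions[a]))

record("stranger drops wallet", "return it", +1.0)
record("stranger drops wallet", "return it", +0.8)
record("stranger drops wallet", "keep it", -1.0)
print(suggest("stranger drops wallet"))  # return it
```

Returning `None` for unseen situations mirrors the proposal's staged rollout: the robot only suggests actions once it has actually observed how people react in that context.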
