Can AI help us be better people?

A question for Jon Rueda, a PhD candidate and La Caixa INPhINIT fellow at the University of Granada, where he studies the intersection of bioethics, the ethics of emerging technologies, and the philosophy of biomedical innovation.

Photo courtesy of Jon Rueda

Can AI help us be better people?

I think so. I have published a new article with a colleague, Bianca Rodríguez, in which we argue that AI assistants could indeed help us improve some aspects of our morality. Some AI models aim to make us more aware of the limitations of our psychology when we are trying to decide what to do, or to provide relevant factual information. Some of these AIs start by learning your values and preferences, and then, at specific moments, try to offer a better course of action. These are controversial in some ways, because they are not going to improve your ability to make your own decisions. We look at another, more promising system called the Socratic assistant, or SocrAI, which is based primarily on the idea that through dialogue we can advance our knowledge, think through complex moral issues, and improve our moral judgments.

This AI-based voice assistant has not been developed commercially. But I know there is interest, because one of the proponents of the idea, the philosopher Francisco Lara, told us that some companies have approached him about it. That interest will grow. With the enormous popularity of ChatGPT, there is a growing awareness of how quickly AI is improving. We feel like we are having a real conversation with an AI system.

The AI-based Socratic assistant we discuss in our article would not necessarily be trained on the words of Socrates as we know them from Plato’s writings; it would simply try to emulate the Socratic method. It is based on a more procedural understanding of ethics, which is the most philosophically provocative aspect of our article. This Socrates is not going to tell you, “You have to do that,” at a specific moment, but will help you improve your reasoning: to consider empirical facts, to think more logically and coherently. So it won’t tell you what is right or wrong. Socrates never says what the truth is, the concrete truth. But through the dialogues he shows his interlocutors the weak points of their arguments. Through irony, he tells you that what you have said can be countered. And in that process you learn and improve your moral reasoning.
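The article does not specify an implementation, but the non-prescriptive behavior described here can be pictured as a constraint placed on an ordinary conversational model. The following is a minimal, hypothetical sketch in Python: the names `SOCRATIC_PROMPT`, `ask_model`, and `socratic_dialogue` are invented for illustration, and `ask_model` is a stand-in that cycles through canned questions so the sketch runs without any model API; in practice it would forward the prompt and conversation history to a real language model.

```python
# A hypothetical sketch of a Socratic-assistant dialogue loop: the system is
# constrained to never issue a moral verdict, only questions and observations.
# Nothing below comes from the article itself; all names are illustrative.

import itertools

SOCRATIC_PROMPT = (
    "You are a Socratic moral assistant. Never say what is right or wrong "
    "and never recommend a course of action. Instead, question the user's "
    "hidden assumptions, flag factual claims worth checking, and point out "
    "tensions between their stated values and their conclusions."
)

# Stand-in replies so the sketch runs offline. A real implementation would
# send SOCRATIC_PROMPT plus the history to a conversational model here.
_CANNED_QUESTIONS = itertools.cycle([
    "What assumption is doing the most work in that judgment?",
    "Would you accept the same reasoning if the roles were reversed?",
    "Is that an empirical claim? How would you check it?",
])

def ask_model(system_prompt: str, history: list[dict]) -> str:
    """Placeholder for a call to a conversational model of your choice."""
    return next(_CANNED_QUESTIONS)

def socratic_dialogue() -> None:
    history: list[dict] = []
    while True:
        user_turn = input("You: ").strip()
        if user_turn.lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": user_turn})
        reply = ask_model(SOCRATIC_PROMPT, history)
        history.append({"role": "assistant", "content": reply})
        print(f"SocrAI: {reply}")

if __name__ == "__main__":
    socratic_dialogue()
```

The part that matters is the prompt, not the loop: the assistant is restricted to questions and observations rather than verdicts, which is what would separate it, in this sketch, from the prescriptive advisers mentioned earlier.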

We are optimistic in our article, but there are also many concerns that we do not address, such as data protection: What will happen to the data created through interaction with users? That data is also valuable and will help improve the system.

These systems could also shape people’s autonomy and agency in problematic ways. AI could influence our character and manipulate or nudge us into certain types of behavior. There could also be a moral deskilling problem. Imagine that we develop a kind of dependency on these systems: if they do not protect our autonomy, and people start deferring to the advice of AI systems when making ethical decisions, in the long run that could be negative. So it’s hard to reach a balanced appreciation of this technology.

Would it be good for children to grow up with a Socratic assistant? My intuition is that we should be more protective of children because they are still developing. They are building their own autonomy, and it is more sensible not to offer them technologies that limit or reduce it. On the other hand, children are already exposed to other kinds of technologies that can manipulate them and shape their preferences and perspectives. So the relationship between children and new technologies is something that is already happening, and AI applications could of course play a role in it. If we give children good tools to improve their moral skills, that would be good, but we should also be more attentive to the harmful effects.

Some people argue that, because of our evolutionary history, we are biased toward those who are closest to us in time and space, that we have many tendencies toward bias, and that AI could help us be more like an ideal observer. This view is, in a sense, also problematic, because we know that AI systems have their own kinds of biases. Some of these biases are particular to AI, but many are very common and very similar to the biases in our own psychology. In that sense, AI could not only reproduce but also amplify human biases, so we should not be too optimistic about using AI to overcome the limitations of our moral psychology.

Lead image: Mariart0 and Sabelskaya / Shutterstock

