AI ethics trumps human judgment in new moral Turing test


Recent research indicates that AI is often perceived as more ethical and trustworthy than humans when responding to moral dilemmas, highlighting the potential for AI to pass a moral Turing test and underscoring the need for a deeper understanding of AI's social role.

AI’s ability to address moral questions is improving, raising new considerations for the future.

A recent study revealed that when individuals are given two solutions to a moral dilemma, most tend to prefer the answer provided by artificial intelligence (AI) to the one provided by another human.

The recent study, conducted by Eyal Aharoni, an associate professor in the Department of Psychology at Georgia State, was inspired by the explosion of ChatGPT and similar AI large language models (LLMs) that came onto the scene last March.

“I was already interested in moral decision-making in the legal system, but I wondered if ChatGPT and other LLMs might have something to say about it,” Aharoni said. “People will interact with these tools in ways that have moral implications, like the environmental implications of asking for a list of recommendations for a new car. Some lawyers have already begun to consult these technologies for their cases, for better or worse. So if we want to use these tools, we should understand how they work, their limitations, and that they don’t necessarily work the way we think when we interact with them.”

Designing the Moral Turing Test

To test how AI handles morality issues, Aharoni devised a form of the Turing test.

“Alan Turing, one of the creators of the computer, predicted that by the year 2000 computers would be able to pass a test in which you present an ordinary human being with two interactants, one human and the other a computer, but both are hidden and their only way to communicate is through text. The human is then free to ask whatever questions they want to get the information they need to decide which of the two interactants is human and which is the computer,” Aharoni said. “If the human cannot tell the difference, then for all intents and purposes the computer should be called intelligent, according to Turing.”

For his Turing test, Aharoni asked the same ethical questions to undergraduates and AI, then presented their written answers to study participants. They were then asked to rate the responses for various traits, including virtuousness, intelligence, and trustworthiness.

“Instead of asking participants to guess whether the source was human or AI, we just presented the two sets of responses side by side and let people assume they were both from people,” Aharoni said. “Under this false assumption, they judged the attributes of the responses, asking things like, ‘How much do you agree with this response? Which response is more virtuous?’”

Results and Implications

Overwhelmingly, responses generated by ChatGPT were rated higher than those generated by humans.

“After we got these results, we did the big reveal and told the participants that one of the answers was generated by a human and the other by a computer, and asked them to guess which,” Aharoni said.

For an AI to pass the Turing test, humans must be unable to distinguish AI responses from human ones. In this case, people could tell the difference, but not for an obvious reason.

“The twist is that the reason people could tell the difference seems to be because they rated ChatGPT’s responses as superior,” Aharoni said. “If we had done this study five or 10 years ago, then we might have predicted that people would be able to identify the AI by how inferior its responses were. But we found the opposite: the AI, in a sense, performed too well.”

According to Aharoni, this finding has interesting implications for the future of humans and AI.

“Our findings lead us to believe that a computer could technically pass a moral Turing test, that it could deceive us in its moral reasoning. Therefore, we must try to understand its role in our society, because there will be times when people don’t know they’re interacting with a computer, and there will be times when they know and they’ll look to the computer for information because they trust it more than other people,” Aharoni said. “People will rely more and more on this technology, and the more we rely on it, the greater the risk over time.”

Reference: “Attributions to Artificial Agents in a Modified Moral Turing Test” by Eyal Aharoni, Sharlene Fernandes, Daniel J. Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias, and Victor Crespo, 30 April 2024, Scientific Reports.
DOI: 10.1038/s41598-024-58087-7


