Public confidence in AI varies widely depending on the application, according to research reported in the International Journal of Human-Computer Interaction.
Prompted by the growing importance of artificial intelligence (AI) in society, researchers at the University of Tokyo studied public attitudes toward the ethics of AI. Their results quantify how different demographic groups and ethical scenarios affect these attitudes. As part of the study, the team developed an octagonal visual metric, analogous to a rating system, that could be useful to AI researchers who want to know how their work might be perceived by the public.
Many believe the rapid development of technology often outpaces that of the social structures that implicitly guide and regulate it, such as law or ethics. AI in particular illustrates this, as it has become ubiquitous in the daily lives of so many people, seemingly overnight.
This proliferation, coupled with the relative complexity of AI compared to more familiar technology, can breed fear and mistrust of this key part of modern life. Who distrusts AI, and in what ways, are matters that would be useful to know for developers and regulators of AI technology, but these kinds of questions are not easy to quantify.
Researchers at the University of Tokyo, led by Professor Hiromi Yokoyama of the Kavli Institute for the Physics and Mathematics of the Universe, set out to quantify public attitudes toward ethical issues around AI. There were two questions in particular the team, through analysis of surveys, sought to answer: how attitudes change depending on the scenario presented to a respondent, and how the demographics of the respondents themselves changed attitudes.
Ethics cannot really be quantified, so to measure attitudes toward the ethics of AI, the team used eight themes common to many AI applications that raise ethical questions: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. These, which the group called “octagon measurements,” were inspired by a 2020 paper by Harvard University researcher Jessica Fjeld and her team.
Respondents were given a series of four scenarios to judge according to these eight criteria. Each scenario looked at a different application of AI: AI-generated art, customer service AI, autonomous weapons and crime prediction.
Respondents also gave the researchers information about themselves, such as age, gender, occupation and education level, as well as a measure of their level of interest in science and technology, via an additional series of supplementary questions. This information was essential for the researchers to see what characteristics of people correspond to certain attitudes.
“Previous surveys have shown that risk is perceived more negatively by women, older people and those more familiar with the subject. I was expecting to see something different in this survey given how commonplace AI has become, but surprisingly we saw similar trends here,” said Yokoyama. “Something we saw that was expected, however, was how the different scenarios were perceived, with the idea of AI weapons being met with far more skepticism than the other three scenarios.”
The team hopes the findings could lead to the creation of a sort of universal scale to measure and compare ethical issues around AI. This survey was limited to Japan, but the team has already begun gathering data in several other countries.
“With a universal scale, researchers, developers and regulators could better measure the acceptance of specific AI applications or impacts and act accordingly,” said Assistant Professor Tilman Hartwig. “One thing I discovered while developing the scenarios and the survey is that many topics within AI require more meaningful explanation than we thought. This goes to show there is a huge gap between perception and reality when it comes to AI.”