
Suppose an AI tool could provide accurate predictions about certain stocks you own, and it had a proven track record. How would you feel about using it? Now suppose you are applying for a job at a company whose HR department uses an AI system to screen applications. Would you be comfortable with that?

A new study finds that people are neither wholly enthusiastic about AI nor wholly opposed to it. Rather than falling into camps of technophiles and Luddites, people are discerning about the practical results of using AI.

“We propose that AI appreciation occurs when AI is perceived as more capable than humans, and personalization is perceived as unnecessary, in a given decision context,” said Jackson Lu, a professor at MIT. “AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are met.”

The paper, “AI Aversion or Appreciation? A Capability–Personalization Framework and a Meta-Analytic Review,” appears in Psychological Bulletin. The paper has eight co-authors, including Lu, an associate professor at the MIT Sloan School of Management.

New framework adds insight

The debate over how people react to AI has been long and wide-ranging, often yielding seemingly contradictory findings. An influential 2015 paper on “algorithm aversion” found that people are less forgiving of AI-generated errors than of human errors, while a widely cited 2019 paper on “algorithm appreciation” found that people preferred advice from AI.

To reconcile these mixed findings, Lu and his co-authors conducted a meta-analysis of 163 prior studies that compared people’s preferences for AI versus humans. The researchers tested whether the data supported their proposed “capability–personalization framework,” the idea that both the perceived capability of AI and the perceived need for personalization shape our preference for AI or for humans in a given context.

Across the 163 studies, the research team analyzed more than 82,000 responses spanning 93 distinct “decision contexts,” for instance whether participants would trust AI to make a cancer diagnosis. The analysis confirmed that the capability–personalization framework does help account for people’s preferences.

“The meta-analysis supported our theoretical framework,” Lu said. “Both dimensions matter: individuals evaluate whether AI is more capable than people at a given task, and whether the task calls for personalization.

“The key insight here is that high perceived capability alone does not guarantee AI appreciation. Personalization matters too,” he added.

For example, people tend to favor AI for tasks like detecting fraud or sorting large datasets, areas where AI’s abilities exceed humans’ and personalization is not required. But in contexts like therapy, job interviews, or medical diagnosis, people are more resistant to AI, feeling that a human can better recognize their unique circumstances.
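The framework’s logic can be sketched as a simple decision rule. The following toy illustration is not from the paper; the function name and the labeled example contexts are hypothetical, chosen only to mirror the two conditions described above (perceived capability and need for personalization):

```python
# Toy sketch (not from the paper) of the capability-personalization framework:
# AI appreciation is predicted only when AI is perceived as more capable than
# humans AND the task is seen as not requiring personalization.

def predicted_preference(ai_more_capable: bool, needs_personalization: bool) -> str:
    """Return 'AI appreciation' only when both conditions are met,
    otherwise 'AI aversion', per the framework described above."""
    if ai_more_capable and not needs_personalization:
        return "AI appreciation"
    return "AI aversion"

# Hypothetical encodings of contexts mentioned in the article:
contexts = {
    "fraud detection": (True, False),   # AI seen as more capable; impersonal task
    "sorting large datasets": (True, False),
    "therapy": (False, True),           # personal context; humans preferred
    "job interview": (False, True),
}

for task, (capable, personal) in contexts.items():
    print(f"{task}: {predicted_preference(capable, personal)}")
```

Note that under this rule, high perceived capability alone is not enough: a capable AI applied to a task that demands personalization still yields aversion.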

“People have a fundamental desire to see themselves as unique and distinct from others,” Lu said. “AI is often viewed as impersonal and prone to operating in a rote manner. Even if the AI is trained on a wealth of data, people feel that it cannot grasp their personal situations. They want a human recruiter, or a human doctor, who can see them as distinct from other people.”

Context matters: from tangibility to unemployment

The study also identified other factors that influence individuals’ preferences for AI. For example, AI appreciation is more pronounced for tangible robots than for intangible algorithms.

The economic context also matters: in countries with lower unemployment rates, AI appreciation is more pronounced.

“It makes intuitive sense,” Lu said. “If you’re worried about being replaced by AI, you’re unlikely to embrace it.”

Lu continues to study people’s complex and evolving attitudes toward AI. While he does not view the current meta-analysis as the last word on the matter, he hopes the capability–personalization framework offers a valuable lens for understanding how people evaluate AI across different contexts.

Lu concluded: “We’re not claiming that capability and personalization are the only two dimensions that matter, but according to our meta-analysis, these two dimensions capture much of what shapes people’s preferences for AI versus humans across a wide range of studies.”

In addition to Lu, the paper’s co-authors are Xin Qin, Chen Chen, Hansen Zhou, Xiaowei Dong, and Limei Cao of Sun Yat-sen University; Xiang Zhou of Shenzhen University; and Dongyuan Wu of Fudan University.

The study was supported, in part, by grants to Qin and Wu from the National Natural Science Foundation of China.
