User Perceptions and Trust of Fake News Detection Technology
The term “fake news” and its connotations took hold of the country during Donald Trump’s presidency. Since then, the proliferation of unchecked journalism and false reporting has led to the adoption of AI tools to wade through the mess. This raises the question: do online users trust AI to weed out fake news?
A recent study by University of Florida College of Journalism and Communications scholars Jieun Shin and Sylvia Chan-Olmsted sought to understand how online users perceive fake news detection technology and whether they trust it to do the job. Do online users believe that machine learning can detect misinformation in journalism?
The answer, they found, is that it depends. Younger users were more accepting of the technology and its capabilities, and individuals with more experience with fact-checking tools, AI, and machine learning in general were more likely to trust the fake news detector’s findings. Individuals who doubted their own ability to detect fake news were also more likely to trust the technology’s abilities and outcomes.
The researchers also found that trust levels were higher when users perceived the application to be highly competent at detecting fake news, highly collaborative in working with human users, and more capable of operating autonomously. Users were also more likely to trust the application when they perceived the technology as less complex.
The authors suggest that the apolitical nature of AI could contribute to the overall level of trust users are willing to offer. Ultimately, trust in the technology was paramount for user adoption: if the AI couldn’t be trusted, there was no reason to use it to sift through misinformation.
According to Shin, “While some people view automated fact-checking tools as a breakthrough in combating misinformation, others are worried about delegating the task of truth validation to computers. Considering the speed and volume of misinformation spreading in online space, it is inevitable for fact-checkers, platforms, and consumers to rely on software that helps them identify false information to a certain extent. Therefore, we should put more effort into understanding the resistance to AI systems as much as the development of automated tools themselves.”
Shin and Chan-Olmsted view their research as a starting point for future work on fake news detection technology and the trust users place in it. The authors contend that users will not adopt the technology without some level of trust. They hope that future research will analyze data from more than one application and will also consider users’ ideological attitudes toward the concept of fake news, which can be a partisan construct.
The original article, “User Perceptions and Trust of Explainable Machine Learning Fake News Detectors,” was published in the International Journal of Communication.
Authors: Jieun Shin, Sylvia Chan-Olmsted
This summary was written by Dana Hackley, Ph.D.
Posted: March 6, 2023
Insights Categories:
AI, Trust
Tagged as: AI, AIatUF, Fake News, Jieun Shin, Sylvia Chan-Olmsted