Study: AI and Crowdsourcing Labels Can Minimize Biases in the Fact-Checking Process
A new study has found that artificial intelligence (AI) and crowdsourcing labels can minimize biased perspectives in fact-checking processes. The findings by Won-Ki Moon, assistant professor of advertising at the University of Florida College of Journalism and Communications, and Myojung Chung, assistant professor of journalism at Northeastern University, appear in "AI as an Apolitical Referee: Using Alternative Sources to Decrease Partisan Biases in the Processing of Fact-Checking Messages," published in Digital Journalism on Sept. 14.
The article notes that fact-checking efforts have yielded limited success in combating political misinformation because of partisan-biased information processing. In the study, the authors examined how labeling fact-checking messages as coming from human experts, AI, crowdsourcing, or a human expert-AI hybrid might influence partisans' processing of those messages.
According to the authors, “Results showed that AI and crowdsourcing source labels significantly reduced motivated reasoning in evaluating the credibility of fact-checking messages whereas the partisan bias remained evident for the human experts and human experts-AI hybrid source labels.”
Motivated reasoning is a cognitive and social response in which individuals, consciously or unconsciously, allow their biases to shape how they perceive new information.
They add, “While research on the human-AI collaboration is still at a nascent stage relative to the broader AI literature, further exploration of these intricacies and the underlying mechanisms will allow us to better develop effective fact-checking messages utilizing the human-AI collaboration.”
Posted: September 18, 2023
Category: AI at CJC News, College News, Research News
Tagged as: Advertising, AI, Digital Journalism, Fact-Checking, Misinformation, Motivated Reasoning, Won-Ki Moon