“Don’t Believe Everything You Read Online”: How AI Fact-Checking Could Challenge Political Bias in Science Information Processing
As social media has grown into many people’s primary news source, so has its potential for misinformation. Facebook and X have both launched fact-checking tools to combat so-called fake news. Yet many users dismiss the fact-check itself as false, especially when it challenges their preexisting views.
Could an AI fact-checker seem more objective and help change minds?
Science communicators may find that fact-checking does little to sway opinion on politicized topics. Partisan views on issues such as climate change and COVID-19 can distort readers’ perceptions, making them more likely to accept false information and reject any fact-check that contradicts it.
Won-Ki Moon, an assistant professor of advertising in the University of Florida College of Journalism and Communications, and Lee Ann Kahlor, a professor at the University of Texas at Austin, examined partisan bias in how people process false information about science topics. They tested whether political affiliation shapes perceptions of fact-checking and compared AI fact-checkers with traditional human ones. Could AI, which is ostensibly less biased, help readers challenge their preexisting beliefs and consider that the false information might actually be false?
Fact-checks often fail because they trigger cognitive dissonance, the discomfort people feel when confronted with information that challenges deeply held beliefs. As certain science topics have become politically loaded, partisan identity shapes how people perceive these issues, making them more likely to accept information that favors their preferred political party.
Attempts to challenge these preconceptions can backfire, as they only reinforce that sense of group loyalty. As a result, fact-checks may seem like a threat to one’s personal beliefs. By rejecting them, the reader can comfortably continue to accept the misinformation as credible.
Fact-checkers have traditionally been experts in a given field, but that authority doesn’t always translate into credibility. In the current political climate, many people distrust authoritative sources and instead seek information that “feels true.” Could AI fact-checkers prompt them to question this presumption and reconsider what they’re reading?
To answer this question, the researchers asked participants to read a Twitter post containing false, politically biased information about climate change, nuclear energy or COVID-19. The story criticized either the Democratic or the Republican Party for undesirable actions (e.g., sending COVID funds to their own party or hiding the dangers of nuclear waste) by highlighting an opposing politician who blamed them.
Participants then read a fact-check that refuted the story by providing evidence and citing a source (either human scientists or AI).
Finally, they evaluated both the source of the message and the message itself for credibility.
The team found that participants who read false information about the opposing political party rated both the message and its source as more credible than did those who read fake news about their own party. The more credible the source seemed, the more likely participants were to accept the story. These results tracked with previous research on partisan bias in news processing.
However, fact-checks triggered less partisan bias when attributed to AI. Participants were more likely to rate an AI fact-check as objective, balanced and trustworthy, even when it refuted a negative message about the opposing party. Moreover, they showed less trust in stories that benefited their own party once an AI fact-check refuted them.
While people may still accept misinformation that supports their existing beliefs, the AI fact-check seems to disrupt that process. The authors suggest that AI’s dual nature as an objective, yet flawed, source may encourage readers to question what they’re reading. It interrupts their usual heuristic for judging credibility: “Does this information feel true based on my social identity?”
This research suggests that AI fact-checkers, while imperfect, may help science communicators combat misinformation about politically charged topics. Even if they can’t directly change minds, they can reduce the effect of partisan bias on how people process information. In an era when people are constantly inundated with news catered to their views, AI-generated fact-checks could be crucial to reclaiming a healthy skepticism in online media consumption.
The original article, “Fact-checking in the age of AI: Reducing biases with non-human information sources,” was published in the March 2025 issue of Technology in Society.
Authors: Won-Ki Moon, Lee Ann Kahlor
The summary was written by Rachel Wayne.
Posted: December 12, 2024
Insights Categories: AI, Trust
Tagged as: AIatUF, Artificial Intelligence, Fact-Checking, Trust, Won-Ki Moon