Do Consumers Trust AI in the News-Production Process?
With the advent of technologies like ChatGPT, Midjourney and DALL-E, AI has become a household name when it comes to content production. AI can create just about anything you want in moments: it will write a letter, a book, a job application; it will generate digital art or splice together a movie; it will hand you piles of code or a fully formed website.
But what happens when it tries to tell you the news?
The ability to generate large amounts of content with little to no human intervention has made AI content creation an attractive tool for media platforms, including the news media. The use of AI in the news is a particularly complicated issue, however, given the impact that news has on day-to-day life. Unlike many other forms of media, news reporting can directly shape political, social and financial systems, making trust in the news a vital concern.
News media companies that wish to integrate AI into their content production for gains in efficiency must tackle two important questions: will consumers accept the use of AI in news, and how will they consume and trust the content it creates?
Researchers Sylvia Chan-Olmsted, professor and director of Media Consumer Research at the University of Florida College of Journalism and Communications, and Steffen Heim, Ph.D. candidate at Helmut-Schmidt-University Institute of Marketing in Hamburg, Germany, examined consumer trust in AI-created news. They cast a wide net to understand the factors that influenced how participants viewed the use of AI in news media, particularly along a spectrum of AI integration.
To do so, they conducted a study asking participants to assess their preferred level of AI integration in the news production process, which was split into two phases: discovery and information gathering (discovering newsworthy topics, then assigning and gathering relevant information about them) and writing and editing. For the levels of AI integration, participants were presented with a continuum that ranged from human-led production (no AI use at all, or AI used only to support human work), through humans and AI working equally on production, to AI-led production (humans supporting AI, or AI used exclusively).
Study findings suggested that participants generally preferred lower levels of AI integration in both the discovery and writing phases of news production, though they did not reject AI outright, provided it primarily supported human work.
Several factors played into these preferences. For example, participants who were highly motivated to engage with news tended to prefer human-centric news production, while those with weaker engagement were more open to AI integration. Trust in news produced with AI supporting humans was higher than in entirely human-created news, especially when AI was used in the discovery and information gathering phase.
Overall, the study showed that consumers were willing to accept AI in news media, and even found it trustworthy and useful, as long as the technology didn’t start taking control of the process. The research also offers a first look at the factors influencing AI preferences and trust in the realm of news media, yielding valuable insights for content creators and news distributors alike.
The original article, “Consumer Trust in AI–Human News Collaborative Continuum: Preferences and Influencing Factors by News Production Phases,” was published in Journalism and Media on Sept. 11, 2023.
Authors: Steffen Heim, Sylvia Chan-Olmsted
This summary was written by Vaughan James, Ph.D.
Posted: September 20, 2023
Insights Categories:
AI, Audience, Trust
Tagged as: AIatUF, Sylvia Chan-Olmsted