How Chatbots Can Effectively Communicate Science Information
AI, Communication and Technology
Earlier this year, the New York Times ran a story titled “Chatbots Are Here, and the Internet Industry Is in a Tizzy.” Love them or hate them, chatbots are here to stay. But are they effective communicators, particularly when it comes to dispensing scientific advice?
University of Florida College of Journalism and Communications Advertising Assistant Professor Jinping Wang and Lulu Peng, a postdoctoral associate at Huazhong University of Science and Technology, set out to explore this very question. The researchers were primarily interested in how persuasive chatbots can be in engaging audiences when discussing important scientific topics.
Wang and Peng note that science communication has evolved beyond merely disseminating information; it is now used to change attitudes, beliefs and behaviors.
The researchers examined how effective the emotional appeals of fear and hope are at engaging audiences and motivating them to act.
The researchers also investigated the role of anthropomorphism — the degree to which chatbots exhibit human-like characteristics. The hypothesis was that the inclusion of faces, names and personalities enhances emotional connection and receptivity.
The team studied two different topics: skin cancer prevention and biodiversity conservation. In the first study, the researchers focused on sunscreen use, and the messages were delivered by two different depictions of bots: a smiling avatar wearing a headset and nondescript conversation bubbles. Four bots were created, each pairing one of the two depictions with either a fear or a hope appeal in two-way conversations with participants.
The most persuasive model was the humanlike bot that used first-person pronouns like “I” and “we” and addressed participants by name while discussing the dangers of ultraviolet radiation, i.e., fear messages. Study participants reported that they were now more mindful about sunscreen use and would make more of an effort to wear sunscreen whenever they were outside. Conversely, a less anthropomorphic model proved more effective in conveying low-fear messages.
“A nice and friendly chatbot vs. a machinelike chatbot,” the authors write, “may further reduce risk perceptions in the hope appeal condition, rendering hope appeals more lighthearted and less compelling.”
For biodiversity conservation, the researchers presented participants with a donation scenario involving the World Wildlife Fund. Fear and hope messages framed the discussion, delving into personal and societal threats arising from the loss of biodiversity. Consistent with Study 1, their findings revealed a nuanced interaction between emotional appeals and anthropomorphic cues.
While mindful anthropomorphism (the conscious and deliberate attribution of human-like traits, emotions, or intentions to non-human entities) did not significantly differ across conditions, mindless anthropomorphism (the unconscious or automatic attribution of human-like characteristics to non-human entities) varied markedly. Interacting with the more human-like bots triggered mindless anthropomorphism, which suggests that participants unconsciously applied human qualities and social rules, influencing how they processed the information and behaved afterward.
The results underscored a matching effect: fear appeals were more compelling with the more human-like bots, while hope appeals found greater success with less anthropomorphic counterparts. These findings highlight how the interplay between emotional appeals and anthropomorphic cues shapes persuasive outcomes.
Additionally, personal risk perception emerged as a key psychological element, mediating the connection between mindless anthropomorphism and donation intention. When fear appeals intensified participants’ sense of personal risk, their donation intention increased. The researchers did not, however, observe this mediation in hope-framed communications.
This research illuminates the interplay between emotional appeals and anthropomorphic cues in chatbot-driven science communication. As we navigate the digital landscape, understanding these dynamics becomes crucial. By integrating this knowledge, practitioners can improve the efficacy of chatbots in persuasive science communication campaigns addressing critical issues like skin cancer prevention and biodiversity conservation. By leveraging emotional appeals and human-like avatars, developers can train their chatbots for impactful science communication.
The original paper, “Striking an Emotional Chord: Effect of Emotional Appeals and Chatbot Anthropomorphism on Persuasive Science Communication,” was published online in Science Communication on Sept. 7, 2023.
Authors: Jinping Wang and Lulu Peng.
This summary was written by Gigi Marino.
Posted: November 29, 2023