Study: Human-Machine Communication and Explainable Artificial Intelligence Can Help Influence Humans’ AI Perceptions and Attitudes
A new study has found that human-machine communication and explainable artificial intelligence can help influence humans’ perceptions and attitudes toward artificial intelligence (AI).
The findings by Kun Xu, associate professor in emerging media at the University of Florida College of Journalism and Communications (UFCJC), and Jingyuan Shi, associate professor at Hong Kong Baptist University, are featured in “Visioning a Two-Level Human–Machine Communication Framework: Initiating Conversations Between Explainable AI and Communication,” published in Communication Theory on July 30.
Xu and Shi examined the growing field of explainable artificial intelligence (XAI), the class of systems that make visible how an AI system reaches decisions, generates predictions, and executes its actions. They suggest that a conversation between human-machine communication research and XAI is both necessary and advantageous.
According to the authors, “Although research combining XAI and communication is still limited, communication research revolving around messages could be fertile ground for understanding the various outcomes of explanations.”
They add, “As we approach an exciting but uncertain future of using and innovating AI technology, we face a growing demand for understanding how AI works, who develops and controls AI, and why AI makes certain recommendations. This article explores the areas in which the communication scholarship, especially human-machine communication, and the XAI scholarship can be bridged.”
Posted: August 27, 2024