Can AI Speak Our Language? Exploring Human-Machine Communication
In just a decade, artificial intelligence (AI) has become a normal part of our daily lives. Whether we’re consulting Alexa about the weather, letting Netflix choose movies for us, or trusting algorithms for medical diagnoses, we are left to wonder how these technologies make decisions, and who ultimately leads the communication: AI or human?
As AI continues to play an increasing role in both personal and professional spheres, the need to understand not just the outputs of these technologies, but also the processes behind them, becomes ever more critical.
This gap between AI’s capabilities and human understanding is at the heart of the research led by Kun Xu, associate professor in emerging technologies at the University of Florida’s College of Journalism and Communications, and his colleague Jingyuan Shi, associate professor of interactive media at Hong Kong Baptist University.
Their work merges two growing fields — explainable AI (XAI) and human-machine communication (HMC) — to address how we can begin to bridge the divide between AI’s decision-making processes and human comprehension.
Xu and Shi suggest that a conversation between XAI and HMC is not just helpful but also necessary. XAI focuses on making the internal workings of AI more transparent and understandable, while HMC explores how humans and machines communicate. By connecting these two fields, their research advocates for a more comprehensive approach to human-AI interaction.
The central contribution of their work is a two-level human-machine communication framework. The first level covers how humans interact with AI, focusing on the outputs and decisions these systems produce. At the second level, the researchers argue, we must also account for how those decisions are made, drawing attention to the transparency of AI’s inner workings.
By bridging the gap between these two levels — understanding both the “what” and the “how” of AI decision-making — the research offers a roadmap for how we can build trust in and comprehension of increasingly complex technologies.
This study underscores the need to make AI’s decision-making processes visible and interpretable, particularly in everyday use. Whether it’s a recommendation system on a streaming platform or an algorithm used in medical diagnostics, transparency is crucial for building trust with human users.
“While communication researchers have growingly stressed the importance of how AI can be communicative, the question of how AI can be communicated remains understudied,” the researchers write.
The team’s work paves the way for future studies at the intersection of communication and AI, urging closer collaboration between communication scholars and AI developers so that future systems are both intelligent and understandable. The research carries practical implications not only for AI design but also for how we approach the evolving relationship between humans and machines, pointing toward more effective and trustworthy AI interactions.
The original paper, “Visioning a two-level human–machine communication framework: initiating conversations between explainable AI and communication,” was published in Communication Theory, Volume 34, Issue 4, November 2024.
Authors: Kun Xu and Jingyuan Shi.
This summary was written by Gigi Marino.
Posted: January 30, 2025