Decoding the Digital Dialogue: A Two-Step Framework for Human-AI Interaction
In an era when artificial intelligence is becoming increasingly ubiquitous, from AI financial advisors to AI romantic companions to virtual mental health therapists, the line between human and machine communication is blurring faster than ever. As we grapple with this new reality, researchers are proposing novel ways to understand and navigate our evolving relationship with AI.
University of Florida College of Journalism and Communications Associate Professor in Emerging Technologies Kun Xu collaborated with Jingyuan Shi from Hong Kong Baptist University on a conceptual paper that introduces a two-level framework for human-machine communication (HMC).
Their paper offers a fresh perspective on how we interact with and understand artificial intelligence. This innovative approach aims to bridge the gap between two emerging fields: explainable AI (XAI) and human-machine communication.
The first level of the framework focuses on direct interactions between humans and AI, examining how people engage with technologies that act as communicators. This includes studying user experiences with chatbots, voice assistants and other AI-driven interfaces.
The second level explores how individuals perceive and evaluate explanations about AI’s inner workings. This aspect of the framework addresses the growing need for transparency in AI systems, acknowledging that users are increasingly curious about the mechanisms behind AI-generated recommendations and decisions.
By combining these two levels, the researchers propose a more comprehensive approach to understanding human-AI interactions. This dual perspective allows for a nuanced examination of both the surface-level engagement with AI and the deeper comprehension of its underlying processes.
The paper emphasizes the importance of incorporating human elements in AI systems, suggesting that explanations about human participation in data annotation, outcome verification and model selection could significantly influence users’ perceptions of AI. This human-in-the-loop approach aims to enhance trust and understanding in AI systems by making their decision-making processes more transparent and relatable.
Furthermore, the researchers introduce the concept of “message production explainability” as a crucial dimension in understanding AI technologies. This aspect focuses on how users perceive the interpretability and transparency of an AI system’s message production process, adding a new layer of complexity to the study of human-machine communication.
The proposed framework offers significant implications for both theoretical research and practical applications in AI development. It encourages a more holistic approach to designing AI systems, taking into account not only the user interface but also the need for clear explanations of AI’s internal mechanisms.
As the authors note, “As we approach an exciting but uncertain future of using and innovating AI technology, we face a growing demand for understanding how AI works, who develops and controls AI, and why AI makes certain recommendations.” Their two-level framework provides a roadmap for addressing these pressing questions, paving the way for more transparent, understandable, and user-friendly AI systems.
As AI continues to integrate into our daily lives, frameworks like this will be essential in ensuring that we can effectively communicate with, understand and trust our increasingly intelligent digital counterparts.
The original paper, “Visioning a two-level human–machine communication framework: initiating conversations between explainable AI and communication,” was published in Communication Theory on July 30, 2024.
Authors: Kun Xu and Jingyuan Shi.
This summary was written by Gigi Marino.
Posted: September 18, 2024