When the Machine Learns from Users, is it Helping or Snooping?
Most consumers today encounter algorithm-based recommendations drawn from their past usage or purchase history. But how do consumers actually feel about machines recommending media or goods they might like?
A new study by Won-Ki Moon, an assistant professor of advertising at the University of Florida College of Journalism and Communications, and colleagues found that online users don’t mind machine-learning systems knowing their likes and dislikes and adapting accordingly. Individuals who use platforms such as Netflix or YouTube reported that they do not find algorithmic suggestions based on their viewing history intrusive or an invasion of privacy.
In fact, the authors contend that increasing the transparency of AI makes users more willing to let it inform their decisions and reduces user frustration. The study suggests that users are even more forgiving of the learning curve when they know algorithms are actively working behind the scenes to learn their preferences.
Previous studies have suggested that users who knew their data was being tracked would distrust the platform and feel that their privacy had been invaded. According to this study, however, in the era of digital personal assistants data tracking is no longer necessarily deemed snooping. Being transparent about the platform’s AI inner workings enhanced users’ trust even when the AI’s performance fell short.
The researchers used a cartoon, meme-style “helper” to ask for patience and explain that the learning algorithms were working hard to understand the user’s data. This appeal made users more understanding, consistent with human nature and societal norms around customer service.
Future research should use professionally developed, real-world websites rather than the contrived site built for this study. It should also examine user-AI interaction over an extended period to gauge whether users continue to trust the AI if its performance does not improve.
The original article, “When the Machine Learns from Users, Is it Helping or Snooping?”, appeared in Computers in Human Behavior, Volume 138, January 2023.
Authors: Sangwook Lee, Won-Ki Moon, Jai-Gil Lee, S. Shyam Sundar
This summary was written by Dana Hackley, Ph.D.
Posted: October 13, 2022
Insights Categories:
AI, Communication and Technology
Tagged as: AIatUF, Algorithms, Recommendation Engines, Won-Ki Moon