Developing a Legal Framework for Regulating Emotion AI

Publication Title
B.U. J. Sci. & Tech. L.
Publication/Creation Date
August 21 2020
Jennifer Bard (creator)
University Of Cincinnati (contributor)
University Of Cambridge (contributor)
Media Type
Journal Article
Persuasive Intent

Of all the pressing issues facing the world in the summer of 2020, establishing a code of ethics for the use of Artificial Intelligence (AI) technology might not appear to belong at the same level as combatting Covid-19 or addressing racism. But a team of researchers at the Centre for the Study of Existential Risk at the University of Cambridge (the Cambridge Team) traces the harm caused by these events directly to the increased use of AI-assisted decision making.

In July 2020, they issued an urgent call to “technologists, ethicists, policymakers and healthcare professionals,” asking them “to consider how ethics can be implemented at speed in the ongoing response to the COVID-19 crisis.” Although the Cambridge Team did not call for help from law professors, any effort to develop an enforceable AI code of ethics will face significant legal barriers unless we can better articulate the harms attributable to the newest iteration of AI: Emotion AI.

Based on the theory that human emotions can be detected through analysis of facial expressions, Emotion AI has become a multi-billion-dollar industry and has already been adopted by companies you have almost certainly heard of for market research, digital interviewing, and screening prospective employees. Not only do Emotion AI systems claim to read emotions; in a chilling throwback to the murderous computer of 2001: A Space Odyssey or the romantic partner of Her, they claim to convincingly generate emotional responses in their interactions with humans.

While I do not purport to provide a code of Emotion AI ethics, or, more accurately, to choose among the many proposed codes that already exist, this paper looks directly at the legal barriers to enforcing such a code in the absence of a shared understanding of the harm caused by either detecting or imitating feelings, and it proposes a framework for addressing these harms in the context of existing legal remedies protecting against invasions of privacy by both public and private entities. The longstanding concern that AI-assisted decision making recapitulates the biases of society in ways that are difficult to detect and mitigate is still very much with us. But in this article I make the case that the potential harm of Emotion AI is even more significant because it is so much more difficult to define, and I create a framework for analyzing that harm within the scope of existing legal remedies.
HCI Platform
Relation to Body
Related Body Part
Marketing Keywords
Her, 2001: A Space Odyssey, HAL, Affectiva

Date archived
November 18 2021
Last edited
November 18 2021
How to cite this entry
Jennifer Bard. (August 21 2020). "Developing a Legal Framework for Regulating Emotion AI". B.U. J. Sci. & Tech. L. University of Florida Levin College of Law. Fabric of Digital Life.