01
Interactive Survey

100% Happy: Who’s to Argue with the Algorithm?

The criteria for the success of emotion recognition (ER) AI rest on the same logic and knowledge used to develop the ER algorithms themselves. An algorithm is deemed accurate as long as it replicates the labels of its training data set. The training set is usually labeled by experts according to the Facial Action Coding System (FACS), which taxonomizes facial muscle movements by associating them with emotion expressions. These criteria are narrowly derived from a closed system of knowledge; they have not been sufficiently open to questioning by external expertise, and even less so to being matched against the subjective accounts of the persons in the analysed pictures.

This interactive survey aims to reverse the commonplace methodology of emotion recognition data collection by developing more transparent, subject-centered methods of data collection and verification. Here the participant is invited to attest to the accuracy of the ER assessment and to choose how to label the emotional states in the pictures. Comparing the automatic assessment of emotions with participants' first-person accounts of what they felt when the photographs were taken makes it possible to note and classify disparities between the subjective and the automated readings of these facial expressions.

The data accumulated through this website will lay the ground for a larger study exploring the ways in which digital uses of emotion, or emotion surveillance, restructure our understanding of emotion as well as our relationship to ourselves. An important aspect of this work is the consideration of the ethical implications of emotion recognition technologies and the demonstration of the need for their regulation.
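The comparison described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical labels (the record structure, field names, and example emotions are assumptions, not the project's actual data schema): each record pairs the algorithm's label with the participant's self-reported emotion, and the script tallies where the two readings diverge.

```python
# Minimal sketch (hypothetical data): tallying disparities between an
# ER model's labels and participants' self-reported emotions.
from collections import Counter

# Each record pairs the automated label with the subject's own account.
records = [
    {"model": "happy", "self_report": "anxious"},
    {"model": "happy", "self_report": "happy"},
    {"model": "neutral", "self_report": "sad"},
]

# Count each (automated label, felt emotion) pair where the two disagree.
disparities = Counter(
    (r["model"], r["self_report"])
    for r in records
    if r["model"] != r["self_report"]
)
agreement_rate = 1 - sum(disparities.values()) / len(records)

print(f"agreement rate: {agreement_rate:.2f}")
for (model_label, felt), n in disparities.most_common():
    print(f"model said {model_label!r}, subject felt {felt!r}: {n}")
```

Classifying the disagreement pairs themselves, rather than reporting a single accuracy score, is what lets such a study surface systematic gaps between automated and subjective readings.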
