Lodi Valley News.com

Algorithms that identify feelings raise ethical dilemmas | Science Time

Ugly face, kind face, cruel face: everyday language reflects the importance of the face as a vehicle of emotional expression. Look at a photo of someone in today's newspaper or any other media outlet. The face will likely convey an emotion you will know how to recognize. This ability to infer a person's mental state from the face, even at a distance on a screen or monitor, guides our behavior, judgments, and decisions, from a vote to a veto.

The brain networks involved in this ability are already known. But an important question remains open: is it innate or acquired? Are we born able to infer feelings in others, or do we learn to do so over the course of life? Is it an extra resource that evolution built into our brains, or a socially acquired skill? There is recent evidence for the first hypothesis: children who cannot yet speak are able to recognize facial emotions, preferring faces that smile to those that glare.

If it is a biologically determined skill, then it should be universal, that is, the same among all peoples. There is evidence that this is true as well. One consequence of this reasoning is that algorithms can be trained to recognize emotional faces. With the help of cameras already installed on many corners, they would be able not only to tell who we are, but also to assess whether we are happy, sad, or angry.


Researchers at the University of California, in collaboration with Google, recently investigated this possibility, teaching computer programs to recognize facial expressions of emotion in six million video clips from 144 countries. The idea was to compare the facial configurations that recur in similar contexts (parties or accidents, football matches or disasters...) and to classify the differences in pixel intensity in the moving facial images according to the corresponding emotions. The algorithm rated the depicted faces and their emotions consistently, regardless of where the clips were captured.
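The core idea, classifying faces by comparing pixel-intensity patterns against emotion categories, can be illustrated with a minimal sketch. This is not the study's actual method: the data, the four-pixel "faces," and the emotion labels below are invented for illustration, and a simple nearest-centroid rule stands in for the far more sophisticated model the researchers trained.

```python
# Illustrative sketch only: a nearest-centroid classifier over toy
# pixel-intensity vectors, loosely analogous to grouping facial
# configurations by emotion. All data here is hypothetical.

def centroid(vectors):
    """Element-wise mean of equal-length intensity vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(face, centroids):
    """Return the emotion whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(face, centroids[label]))

# Toy "training" examples: 4-pixel faces per emotion (invented values).
training = {
    "happy": [[0.9, 0.8, 0.1, 0.2], [0.8, 0.9, 0.2, 0.1]],
    "sad":   [[0.1, 0.2, 0.8, 0.9], [0.2, 0.1, 0.9, 0.8]],
}
centroids = {label: centroid(faces) for label, faces in training.items()}

print(classify([0.85, 0.85, 0.15, 0.15], centroids))  # -> happy
```

A real system would extract features from video frames with a deep network rather than raw pixels, but the principle is the same: faces whose intensity patterns cluster together receive the same emotion label, whatever their country of origin.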

The work has many ramifications, some with ethical implications. For example: if an assault is committed somewhere and surveillance cameras are used to identify those involved, can we attribute aggressive intentions to the suspects by analyzing their faces? Another example: if customers read a supermarket greeter's expression as friendly, can the manager be informed and use that information to promote the employee? To what extent can, or should, these analyses be used in police, legal, or commercial contexts?

A challenge for scientists, an opportunity for business, a dilemma for jurists. The neuropsychology of emotions and the neurotechnologies behind these algorithms rest on scientific evidence, but they raise critical dilemmas. Food for thought.