Can a robot teach us empathy?


Empathy is a well-established concept in research that has taken a new direction in the last twenty years. Previously, work on empathy focused almost exclusively on understanding the phenomenon, either through the study of behaviors or through physiological and neurophysiological correlates.

Today, artificial intelligence also tries to reproduce empathy in real, physical entities, especially robots that carry out tasks once characteristic of human beings. Given that machines will take on social roles and share environments with us in the future, it is essential to understand what kinds of designs could benefit the social-emotional education of the children who interact with them.

The perception-action mechanism

Empathy covers a wide range of processes, from the simplest and most automatic to the most sophisticated. The perception-action mechanism is relatively simple: it is present during the first months of life and has been implemented successfully in artificial intelligence. This mechanism allows one child to access the emotional state of another through their own neural and bodily representations.

From two years of age, the early development of perspective taking is observed. Children are at Level I when they understand that the content of what they see may differ from what someone else sees in the same situation. They reach Level II when they understand that they and another person can simultaneously see the same thing from different perspectives. This type of empathy allows children to be motivated by the social needs of others even in the absence of any benefit for themselves.

Social robots follow a developmental path similar to that of human empathy. Three types of robots can be outlined based on the complexity of their components and the empathic process for which they are designed. Depending on the purpose, the context and the tasks that these machines are going to carry out, the critical factors of empathy that we implement in them can vary.

Identifying emotions

When talking about the development of empathy in early childhood education, the goal is for children to be able to identify emotions and verbalize them.

Type I robots are based on learning-by-doing procedures; that is, they learn an emotion by practicing it. The robot associates an instructor's imitated or exaggerated facial expressions with labels in its emotional-cognitive computational module in order to learn an empathic response.

Sometimes empathy is the result of a language-based cognitive network.

After learning, the robot can recognize the child's emotional state from their expression, imitate it and label it verbally. In this way, it also sensitizes children to their own emotional signals and gives them the links they need to become aware of the connection between emotional responses and their subjective states. For example, the robot can say, "You're mad, right?" or "I see you happy!".
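
For readers curious about the mechanics, here is a minimal sketch in Python of that learn-then-label loop: the robot averages the expression features it saw for each emotion during training, then labels a new expression by the nearest match. The two-number feature vectors and all names here are illustrative assumptions, not any particular robot's architecture.

import numpy as np

class TypeIEmpathyModule:
    def __init__(self):
        self.examples = {}  # emotion label -> list of feature vectors seen in training

    def learn(self, features, emotion):
        # Associate an instructor's (often exaggerated) expression with a label.
        self.examples.setdefault(emotion, []).append(features)

    def recognize(self, features):
        # Label a new expression by the closest learned centroid.
        centroids = {e: np.mean(v, axis=0) for e, v in self.examples.items()}
        return min(centroids, key=lambda e: np.linalg.norm(features - centroids[e]))

    def verbalize(self, emotion):
        # Mirror the recognized emotion back to the child in words.
        phrases = {"angry": "You're mad, right?", "happy": "I see you happy!"}
        return phrases.get(emotion, f"You seem {emotion}.")

robot = TypeIEmpathyModule()
robot.learn(np.array([0.9, 0.1]), "happy")   # hypothetical features: smile, brow furrow
robot.learn(np.array([0.1, 0.8]), "angry")
print(robot.verbalize(robot.recognize(np.array([0.8, 0.2]))))  # -> I see you happy!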

The similarity between the robot's process and the child's is that both form direct associations during learning. When the robot or the child perceives the other's empathic cues, they feel the associated emotions only if those cues match past experience. Social robots can keep episodic memories with associated emotions and use them to "feel" the current situation.
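
That "only if it matches past experience" condition can be made concrete with a small, hedged sketch: the robot stores (situation, emotion) episodes and reports a felt emotion only when the current situation is similar enough to a stored one. The similarity measure and threshold are assumptions chosen for illustration.

import numpy as np

class EpisodicMemory:
    def __init__(self, threshold=0.9):
        self.episodes = []          # list of (situation features, emotion) pairs
        self.threshold = threshold  # minimum similarity needed to "feel" anything

    def store(self, situation, emotion):
        self.episodes.append((situation, emotion))

    def feel(self, situation):
        # Return the emotion of the closest matching episode, if any is close enough.
        best, best_sim = None, self.threshold
        for past, emotion in self.episodes:
            sim = situation @ past / (np.linalg.norm(situation) * np.linalg.norm(past))
            if sim >= best_sim:
                best, best_sim = emotion, sim
        return best  # None when nothing in memory matches the current situation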

Thus, placed in an interpersonal situation adapted to their age, children can learn emotional gestures and postures, name an emotional state, specify what they are thinking about and show the face that suits others. Their emotional activity, in turn, is conditioned by what they receive from the context. The child is at once an actor of emotional expression and an observer of the emotional effect it produces.

In this situation, children gradually discern the meaning of these emotional actions, and their attention can focus on the effects of their own emotional activities and those of others.

Empathic relationships

Sometimes the goal is not to provoke in the robot reactions that are similar or congruent to the child's emotional state. To establish a long-term empathic relationship, a social robot may need to display moods and emotions that vary over time.

Type II social robots are designed to develop a general emotional state with experience. In this way, the robot can respond and adapt to children’s emotional expressions while developing its own “mood”.

Contingent responses and a degree of unpredictability in behavior imitate human behavior more naturally and favor more positive interaction and learning between the child and the robot.
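
One simple way to picture such a "mood" is as a slowly drifting running average of the emotional valence the robot observes, plus a little noise so its responses stay contingent but never perfectly predictable. In this sketch the decay rate and noise level are illustrative assumptions.

import random

class Mood:
    def __init__(self, decay=0.95):
        self.valence = 0.0   # -1 (negative) .. +1 (positive), shaped by experience
        self.decay = decay   # how strongly past interactions persist

    def observe(self, child_valence):
        # Fold each observed emotion into a slowly drifting running average.
        self.valence = self.decay * self.valence + (1 - self.decay) * child_valence

    def respond(self):
        # Respond contingently on mood, with mild noise for unpredictability.
        felt = self.valence + random.gauss(0, 0.1)
        return "cheerful reply" if felt >= 0 else "subdued reply"

mood = Mood()
for v in [0.6, 0.8, -0.2]:   # a short run of interactions with one child
    mood.observe(v)
print(mood.respond())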

Perspective taking

Perspective taking is considered the most advanced cognitive process among empathy processes. It consists of imagining the perspective of the child and suppressing that of the robot. This process, along with language-mediated association, is what many researchers call cognitive empathy.

The result of perspective taking does not necessarily imply emotional ties to the child. A Type III social robot could project imaginary scenarios built from empathic cues and from the child's context and history, anticipate the child's behavior, show concern and suggest new response alternatives.
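
Schematically, that projection could look like the toy sketch below: the robot imagines candidate scenarios, ranks them by how well their imagined cause fits the cues it observes, and offers the matching suggestion. Every structure here, from the Scenario fields to the overlap score, is a hypothetical illustration rather than an existing system.

from dataclasses import dataclass

@dataclass
class Scenario:
    cause: str            # imagined reason behind the child's state
    likely_behavior: str  # what the child may do next
    suggestion: str       # an alternative response the robot can offer

def take_perspective(cues, scenarios):
    # Pick the imagined scenario whose cause overlaps most with the observed cues.
    return max(scenarios, key=lambda s: len(cues & set(s.cause.split())))

scenarios = [
    Scenario("toy broke", "cry", "Shall we try to fix it together?"),
    Scenario("friend left", "withdraw", "Would you like to draw something for them?"),
]
best = take_perspective({"toy", "frowning"}, scenarios)
print(best.likely_behavior, "->", best.suggestion)  # cry -> Shall we try to fix it together?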

This capacity for projective imagination would allow the child to detect relevant cues in different social situations and anticipate their own and others’ action patterns. However, much more technological advancement is required for a social robot to make attributive judgments to identify the causes behind a child’s thoughts, feelings, and characteristics.

Currently, Type I empathic robots are appearing in industry and have shown their effectiveness in clinical settings. Most empathy research on social robots is slowly moving toward Type II robots, and since research precedes commercialization, we can expect to see more work on Type III robots in the near future.

Maria Isabel Gomez Leon, Professor in Neuroscience, Camilo Jose Cela University

This article was originally published on The Conversation. Read the original.


