Will the social robots in tomorrow’s conscious cities rely on the same old personality traits?

by Kevin Bennett

New research hints that our human prejudices extend to physical and behavioural features of artificial intelligence (AI).  Cities are changing rapidly and it is now time to reexamine the personality nuances of human interactions with AI.

The emerging movement known as “conscious cities” is quickly gaining traction as a promising way of creating smarter, healthier, and happier urban areas. City designers, architects, and psychologists are beginning to incorporate features that respond quickly to the biological and psychological needs of human residents (e.g., reducing stimulation when city streets overload our cognitive systems, or offering opportunities for interaction where people are isolated). What role will AI play in this new landscape? And how will the synergy between human personality and robot personality shape our lives?

We are already living in a world where psychology researchers evaluate robot personality, robot group behavior, and other features of “robot psychology” using traditional techniques originally designed only for humans. It should not be shocking that social robots (i.e., autonomous robots that communicate with humans by following social norms and rules) have personality traits just as social humans do; after all, these AI machines were designed by human engineers. What is intriguing, however, is that many of the perceptions we form and biases we hold in creating first impressions also apply to the world of robots.

Recent research (Reeves, Hancock, & Liu, 2018) suggests that we project personality characteristics onto robots based on how they look, how they sound, and what function they serve. In general, when we anthropomorphize an inanimate object, giving it human-like qualities, we feel emotionally closer to it. But this is only helpful up to a point, because simply animating objects does not always work in the long run. Remember “Clippy,” the Microsoft Office Assistant (also known as the Microsoft paperclip)? He was likable for a few moments before quickly becoming utterly unbearable.

So why not circumvent all of this and simply build a robot that looks just like a human, but without the nuisance factor? One difficulty is that eventually you head into the uncanny valley. This is the point at which a robot seems so human-like that it becomes eerie – think “Sophia,” the first robot citizen. We prefer human-like traits in artificial creations only up to a point; once a robot becomes almost, but not quite, indistinguishable from a human, our comfort with it drops sharply.

Measuring and Designing the Personality of Conscious Cities

As of 2018, a little over 1,000 studies had been conducted on meaningful human–robot social interactions. According to findings presented at the 2018 Technology, Mind & Society Conference in Washington, DC by Stanford University researcher Jeff Hancock, nearly all of these studies focus on one robot at a time. His research team, including Sunny Liu and directed by Byron Reeves, looked at 342 robots in a single study and asked people to evaluate the personality of each robot.

Modern-day automatons come in all forms – cute, furry, metal, mechanical, and so on – and this was the first known study to show all 342 “social robots” together. Based on photographs, participants assigned personality ratings to each robot.

Falling Back on Old Personality Stereotypes

The Stereotype Content Model (Fiske, Cuddy, Glick, & Xu, 2002) proposes that across cultures people initially classify others along two dimensions of personality (warmth and competence).  This groundbreaking research helped psychologists understand how we form perceptions of others and is now being applied to the study of robot personality.

For example, a very talkative and physically cute robot is likely to be seen as friendly and approachable; another robot might look strong and physically imposing. Based on such cues, we ascribe personality traits. It turns out that the way we attribute personality to robots matches up closely with the way we classify real people through stereotypes.

When participants were asked to rate the robots along dimensions of warmth and competence, the researchers found that perceptions differed depending on what the robots looked like.

Left: “Buddy,” a kind and skilled social robot (Photo: Blue Frog Robotics). Right: a robot perceived as dangerously skilled and efficient, but not very warm.
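To make the warmth-and-competence framing concrete, here is a minimal sketch (in Python) of how ratings like these could be aggregated: it averages hypothetical participant scores for each robot and places it in one of the four Stereotype Content Model quadrants. The robot names, ratings, and cut-off are invented for illustration and are not data from the Reeves, Hancock, and Liu study.

```python
# A minimal sketch (not the authors' actual analysis pipeline): averaging
# hypothetical participant ratings of each robot on warmth and competence,
# then placing every robot in one of the four Stereotype Content Model quadrants.
from statistics import mean

# Hypothetical ratings on a 1-7 scale; robot names and values are illustrative only.
ratings = {
    "Buddy":        {"warmth": [6, 7, 6, 5], "competence": [5, 6, 5, 6]},
    "FactoryArm-X": {"warmth": [2, 3, 2, 2], "competence": [6, 7, 6, 7]},
    "PlushPal":     {"warmth": [7, 6, 7, 7], "competence": [3, 2, 3, 3]},
}

MIDPOINT = 4.0  # scale midpoint, used here as an arbitrary cut-off

def quadrant(warmth: float, competence: float) -> str:
    """Label a robot by which side of the scale midpoint its mean ratings fall on."""
    w = "warm" if warmth >= MIDPOINT else "cold"
    c = "competent" if competence >= MIDPOINT else "not competent"
    return f"{w} / {c}"

for robot, scores in ratings.items():
    w, c = mean(scores["warmth"]), mean(scores["competence"])
    print(f"{robot}: warmth={w:.1f}, competence={c:.1f} -> {quadrant(w, c)}")
```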

Part of the conscious cities ideology is the belief that people can experience psychological, biophysical, and cognitive changes often without noticing. Human behaviour and mental processes can be influenced by urban planning, interior spaces, and architecture. Similarly, AI systems, like Amazon’s Alexa or HAL 9000 from 2001: A Space Odyssey, can control physical and social environments and have an impact on human behaviour.

Design One Robot Personality?

For example, robots with a combination of warmth and competence may encourage exploration and feelings of trust. Robots in this category are perceived as friendly, agreeable, adequately equipped to accomplish desired goals, and are associated with desirable social partners. In anticipation of the critical role that AI will surely play in the design of tomorrow’s cities, now is the time to think about the synergy between human personality and robot personality. Do we want all social robots to be high on both competence and warmth – possibly leading to city designs where there really is no such thing as “robot personality”? This combination appears to be the gold standard in robot design, but is that what is best for future environments?

Design Robot Personalities with as Much Variation as Human Personalities?

Another option is to try to create a distribution of robot personalities that closely mirrors the distribution of human personality. This would mean a conscious city with desirable, likable robots along with less desirable, inefficient robots.  Just like real people in real cities.  However, our growing adoration of social robots is forcing us to deal with an uncomfortable truth about ourselves: the prejudices and stereotypes we use on people are the very same ones we use to size up robots.

Match AI Personality with Function

A third option is to design robot personalities that conform to our expectations of function. For example, a social robot that engages with young children to encourage healthy dietary choices should be warm and nurturing.  A robot assigned to alert adults about walking in front of traffic should have a more forceful character.
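One way to picture this “match personality to function” option is as a simple lookup from a robot’s assigned role to a target warmth/competence profile. The sketch below is purely illustrative; the roles, numbers, and helper names are hypothetical, not a published design specification.

```python
# A minimal sketch of the "match personality to function" idea: a lookup that
# pairs a robot's assigned role with a target warmth/competence profile.
# Roles, values, and names are hypothetical illustrations only.
from dataclasses import dataclass

@dataclass
class PersonalityProfile:
    warmth: float       # target warmth, 0.0-1.0
    competence: float   # target competence, 0.0-1.0

ROLE_PROFILES = {
    "child_nutrition_coach": PersonalityProfile(warmth=0.9, competence=0.6),
    "traffic_safety_alert":  PersonalityProfile(warmth=0.4, competence=0.9),
    "emotional_support":     PersonalityProfile(warmth=0.95, competence=0.3),
}

def profile_for(role: str) -> PersonalityProfile:
    """Return the target personality profile for a role, defaulting to a balanced one."""
    return ROLE_PROFILES.get(role, PersonalityProfile(warmth=0.5, competence=0.5))

# Example: a robot warning adults about traffic gets a forceful, highly competent profile.
print(profile_for("traffic_safety_alert"))
```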

Some social robots interact in ways that score low in competence but make up for it in warmth. These tend to be cute, cuddly, sometimes fuzzy robots. Often with big eyes and child-like features, they do not seem overly competent but would be well-suited to provide emotional support when needed.

On the other hand, it is also possible for robots to interact combatively with citizens.  A conscious city, for better or worse, has the ability to use “hostile architecture” as a means of adjusting the behavior of specific groups within the population (e.g., city benches that prevent homeless individuals from sleeping or high frequency sounds that deter children from loitering). 

Some of the social robots evaluated in recent research were lacking in warmth, but they were perceived as very competent. These are often bulky-looking robots that seem capable of executing a physically demanding task with ease. Do we want robot personalities to match up with the activity they are associated with? Future research in this area should explore the outcomes of different AI personalities on behaviour within the community.

Conclusion

AI will undoubtedly play a role in the design of future communities, and now is the time to think critically about the role of personality in these meaningful human–robot interactions. Like any other design project, there are costs and benefits associated with different social robot scenarios. Therefore, careful planning and discussion are necessary before going down one path or another.


References

Fiske, S. T., Cuddy, A. J. C., Glick, P., & Xu, J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82(6), 878-902.

Reeves, B., Hancock, J., & Liu, S. (2018). The social dynamics of human-robot interactions: Generalizations from real people to a comprehensive sample of social robots.  Paper presented at the American Psychological Association Technology, Mind & Society Conference (April 2018), Washington, DC.