Monday, July 10, 2017

What I learnt about research on human trust

Last week I went to the Air Force Academy in Colorado Springs, United States to learn about research that is being done on trust. Mostly for the benefit of my future self, I will make an attempt at summarizing what I learnt about this somewhat foreign (to me) area of research. Trust is measured in lots of different ways, ranging from tightly controlled lab conditions to the messiness of the real world and foreign cultures. Trust models derive largely from the work of Roger Mayer. The fundamental components of trust appear to be ability, benevolence and integrity. In other words, you trust someone when you believe they can do things (ability), when you think they mean well (benevolence) and when you believe they act with integrity. Trust is most crucial in situations of risk: when you are with someone you trust, you are more willing to take risks in that relationship. Mei-Hua Lin discussed how trust depends on the amount of interaction you have had with a person, on similarity, affect and status, and on situational factors. Mayer mentioned an interesting way to measure trust in a person: ask people how willing they would be to give this person a project that is important to them, knowing they cannot monitor them. Across the world, the integrity dimension appears to be the most important predictor of trust. Although trust is mostly considered to be positive, Alan Wagner is studying situations in which people overtrust. Most frequently people use Mayer's questionnaire for measuring trust, but another possibility is Rotter's Interpersonal Trust Questionnaire.
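
To make the measurement side concrete, here is a toy sketch of how a Mayer-style questionnaire might be scored, by averaging Likert ratings within the three subscales. The items and their groupings below are invented for illustration; the real instruments use validated item sets.

    # Toy scoring of a Mayer-style trust questionnaire: average the 1-7
    # Likert ratings within each subscale. Items here are invented.
    def subscale_means(ratings, subscales):
        return {name: sum(ratings[item] for item in items) / len(items)
                for name, items in subscales.items()}

    ratings = {"a1": 6, "a2": 7, "b1": 5, "b2": 4, "i1": 6, "i2": 6}
    subscales = {"ability": ["a1", "a2"],
                 "benevolence": ["b1", "b2"],
                 "integrity": ["i1", "i2"]}
    print(subscale_means(ratings, subscales))
    # {'ability': 6.5, 'benevolence': 4.5, 'integrity': 6.0}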

Trust in the laboratory can be related to confidence. It has been known for some time that confident testimony has greater influence, especially when it comes from people who also calibrate their confidence to their probability of being correct (Tenney et al., 2007). As such, you can examine trust in the laboratory by looking at how people take the advice of advisors who vary in how confident their advice is (see some cool new work by Yeung and Shea). It is apparently even possible to create computer models of trust, which update trust in an opponent on the basis of previous experiences (Juvina et al., 2015). One interesting context in which these models were used was peer-assisted learning of paired associates, in which your partner can inform your answers to the paired associates. In a slightly less cognitive lab setting, trust can be assessed by looking at people's facial expressions as they perform a task collaboratively (the Social BART task). What is more, humans can extract trust from body odors, although this effect is modulated by gender. Extraction of social information from smell is also disrupted in people with autism spectrum disorder.
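
To give a flavour of what such a model might look like (my own toy sketch, not the actual model from Juvina et al.), trust in a partner can be nudged up or down after every interaction, depending on whether their advice turned out to be correct:

    # Toy experience-based trust update (my sketch, not Juvina et al.'s model):
    # trust moves toward 1 after helpful interactions and toward 0 after
    # unhelpful ones; the learning rate sets how quickly it adapts.
    def update_trust(trust, advice_was_correct, learning_rate=0.2):
        target = 1.0 if advice_was_correct else 0.0
        return trust + learning_rate * (target - trust)

    trust = 0.5  # start out neutral about the partner
    for correct in [True, True, False, True, False]:
        trust = update_trust(trust, correct)
        print(f"advice correct: {correct}, trust is now {trust:.2f}")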

Another dimension of trust arises in teams of humans and robots collaborating. Antonio Chella is thinking about whether trust can recover when we let a robot say "sorry". You can also look at how humans trust automation (e.g., in a factory) and examine how often they notice failures of this automation, for example in the AF-MATB task. Apparently errors by the automated system can even elicit an error-related negativity ("oERN"). As the machine or factory makes more errors, people evidently trust it less. In fact, the reliability of one artificial agent even affects how reliable we think another agent is: trust calibration.

On the other hand, do humans consider machines in the same way as other humans? Jonathan Gratch looks at which aspects of robot behavior make us treat robots like humans versus machines. Apparently the relevant dimensions are a sense of agency and displays of emotion, which together he calls mind perception. Humans treat robots unfairly and exhibit different emotions when they feel they are just machines. When you add emotions to the robot, people start to treat it in a more human-like way. Apparently you can even decode from human brain activity whether people think they are dealing with humans versus machines. Gaze is also an important cue that humans use to decide whether to trust a robot. Angelo Cangelosi uses investment games to study how much people trust robots, and observed that people invest more in nice than in nasty Naos. Amazingly enough, even rats prefer helpful robots over non-helpful robots! Team interactions can also be modelled with ACT-R, as Chris Myers' work on synthetic teammates shows.
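
For reference, this is roughly how a one-round investment (trust) game works; the multiplier and split below are the textbook version, not necessarily the exact parameters used in the robot studies:

    # Generic one-round investment (trust) game. The invested amount grows by
    # a multiplier in the trustee's hands; the trustee then returns a share.
    # Parameters are textbook defaults, not those of any specific study.
    def trust_game(endowment, invested, multiplier=3, returned_fraction=0.5):
        assert 0 <= invested <= endowment
        pot = invested * multiplier
        returned = pot * returned_fraction
        investor_payoff = endowment - invested + returned
        trustee_payoff = pot - returned
        return investor_payoff, trustee_payoff

    # A fully trusting investor paired with a "nice" trustee:
    print(trust_game(endowment=10, invested=10))   # (15.0, 15.0)
    # The same investor with a "nasty" trustee who keeps everything:
    print(trust_game(endowment=10, invested=10, returned_fraction=0.0))  # (0.0, 30.0)

The more you trust, the more you stand to gain, but only if the trustee reciprocates, which is what makes the game a natural probe of trust.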

Slightly less related to trust, but more to influence, was work from Matt Lieberman, who showed that activity in the mPFC can predict behavior change in many contexts, such as smoking cessation, wearing sunscreen and more. Now what happens between two people as they are successfully influenced? In experiments at Mount Jordan, Matt Lieberman showed that people's brains are more synchronized when they are watching a video together and are engaged and share a common reality. Synchrony in speech (speech entrainment) can also create social connectedness, because it is associated with increased positive feelings. However, this is not a simple phenomenon: it is not simply that more entrainment is better; rather, more variation in entrainment is better. The amount of speech entrainment even seems to affect whether people take advice from an avatar, although that is again a messy process. Less biological ways to measure connectedness include a questionnaire of social presence, which Kerstin Dautenhahn found to be sensitive to whether or not robots synchronized their interaction with humans.
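
One simple way to quantify entrainment (a bare-bones sketch of my own; real entrainment measures are considerably more sophisticated) is to correlate a prosodic feature, such as mean pitch per conversational turn, across the two speakers:

    # Bare-bones entrainment measure: correlate a prosodic feature across
    # two speakers' successive turns. Real measures are far more elaborate.
    import numpy as np

    def entrainment(feature_a, feature_b):
        return float(np.corrcoef(feature_a, feature_b)[0, 1])

    # Invented example data: mean pitch (Hz) over six turns per speaker.
    pitch_a = [210, 205, 220, 215, 200, 225]
    pitch_b = [190, 188, 202, 198, 185, 205]
    print(f"entrainment r = {entrainment(pitch_a, pitch_b):.2f}")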

Other very interesting work by Clara Pretus looked at what is different in the brains of people who are willing to fight and die for sacred values compared to people who are not. The main difference seemed to be less reliance on the dorsolateral prefrontal cortex for making these kinds of decisions. On a more positive note, very interesting work by Daniel Fessler showed how watching brief videos of prosocial behavior promotes real-world prosocial behavior (donations). The emotion of elevation appeared to drive this real-world behavior. An important determinant in the video content appeared to be reciprocation between the actors. Other happy news is a study by Adam Cohen, who showed that when you ask people what kinds of fictitious characters they would friend on Facebook, they trust Muslims and Christians equally, and the people they find most trustworthy are those who engage in costly religious practices (such as adhering to a kosher diet).

On a larger scale, influence can be measured on Twitter. People such as Vlad Barash have been developing network methods to study social contagion on this social media platform. Tim Weninger showed that social rating systems have a huge influence on how much other people like images/posts: so much so that people are very poor at predicting which image will be more popular on social media, and popularity ratings are driven primarily by other users' ratings. In short, trust and influence are highly complex topics, on which very multidisciplinary research is done from many angles and perspectives.

Some useful tools I learnt about: