Do we trust humans or machines?
Prof. Dr. Anne Scherer, Assistant Professor at the University of Zurich, answers.
The question sounds simple at first, but I am afraid the answer is not. Our trust in man or machine depends on a variety of factors:
First and foremost, it depends on the individual. Each and every one of us has different ideas, or lay theories, about what a machine can or cannot do well and what a human excels at. Many of us believe, for instance, that machines are especially good at finding patterns in huge sets of data or making complex computations, while humans are good at being creative, being empathic, or accounting for soft, more subjective factors. In addition to our ideas about what our counterpart is good at, we also have a certain idea about what we ourselves are good at. Clearly, the more knowledgeable we consider ourselves about a certain topic, the less we trust machines or algorithms - but the less we also trust other humans.
Which brings me to my second point: the task or situation also influences our trust in man vs. machine. As explained earlier, we have certain lay theories about the capabilities of machines, and these are also largely shaped by our experiences. Many of us may have already encountered an algorithm when it comes to investment portfolios, and - unsurprisingly - we may then be more trusting when we encounter an algorithm that estimates our credit risk and decides whether we should get a loan. On the other hand, we may never have encountered a machine in a medical context before, and we may question whether an algorithm or a machine can give us the unique attention we desire in such a context. As a result, we may not easily trust an algorithm that comes up with a medical diagnosis and a treatment plan - although algorithms may very well perform just as well as, if not better than, their human counterparts in these tasks.
Which brings me to my third and final point, which is also the focus of our research: the design of human-machine interactions. Today, the boundaries between humans and machines are becoming blurred. Humans increasingly rely on technology when interacting with each other, and machines are increasingly humanized through social cues and anthropomorphic design features. Ultimately, these design choices affect our perception of these machines and can thus unconsciously alter how we interact with them or how much we trust them in a certain situation. For instance, it may make sense to introduce human features if the situation requires us to trust that we are individually attended to; whereas my research shows that it may make sense to avoid humanization if the situation demands open disclosure without fear of social judgment. As these examples illustrate, the design of a machine can have a great impact on our trust and behavior and thus needs to be considered carefully when machines are employed.
And with this, I hope I could give you a brief glimpse into a very complex topic.