A firm foundation of trust is a prerequisite for the bold route toward a sustainable, technologically enabled future brimming with potential. Yet trust in technology appears to be low in many societies around the world, owing to cultural perceptions surrounding innovation.
Note that trust is not a unidimensional phenomenon. It takes different forms, and distinguishing among them prevents jumping to conclusions and stereotyping. Daniel Dobrygowski and William Hoffman, respectively head of governance and policy at the World Economic Forum’s Centre for Cybersecurity and project lead for data policy, offer a useful framework for thinking about trust in technology: they distinguish mechanical trust from relational trust.
What Is the Tech for Good Movement?
The foundational vision of Tech for Good is that technology itself is not what matters; it is the implications of technology that count. As humans, and more specifically as business professionals, we have a choice to make, and I suggest we choose to use technology as a force for good. We must work to restore trust in technology by using it to the benefit of people and the planet.
Mechanical Trust vs. Relational Trust
Mechanical trust refers to trust in the outcome of technology itself, relating to the trustworthiness of the specific technology to function as it was intended. This is to be distinguished from relational trust, which considers the social norms and agreements behind the technology.
Relational trust takes a philosophical step further to measure the impact of technology on complex systems across societies.
Trust in technology itself, the mechanical trust, seems to be growing rapidly. Yet only a few years ago, people were leery of trusting algorithms. In 2014, researchers at the University of Chicago Booth School of Business and the Wharton School of the University of Pennsylvania demonstrated that even when there was clear evidence that algorithms could be trusted more than humans, people still trusted humans more, persisting in their resistance to technology despite the evidence.
What Is Algorithm Aversion?
This phenomenon was labeled “algorithm aversion.” Given how widely society relies on and engages with algorithmic advice, such skepticism is highly counterintuitive. More recently, however, trust in algorithms has grown significantly and the trend has clearly shifted: people now trust algorithms and AI more than human advice, especially in the face of complexity.
For example, one recent study found that, with the uncertainty created by Covid-19 changing whom and what to trust for medical and financial advice, 83 percent of Indian consumers and business leaders now trust AI-based tools more than they trust humans. Interestingly, the same study suggests that 73 percent of business leaders trust AI bots more than themselves to manage finances.
The University of Georgia has also conducted novel research on how much weight people give to algorithmic suggestions. For the study, the team asked 1,500 individuals to count the number of people in a photograph of a crowd. The researchers then suggested a different number, attributed either to an algorithm or to a consensus of other people, and asked each participant whether they wanted to change their original answer. The conclusion was that participants changed their answers to match the algorithm’s output, whether it was correct or incorrect.
In most cases, people took the average of their original answer and the number the algorithm provided. As the number of people in the photograph grew, counting became more difficult. The significant insight is that people were more likely to adopt the algorithm-generated suggestion than to count for themselves or follow the “wisdom of the crowd.”
These recent studies demonstrate that algorithms are indeed a reliable source that can be trusted. But does this mean people now trust computers more than humans? There may not be a conclusive answer, but widespread dissemination of tech for good will undoubtedly require both forms of trust.
The business world in particular will benefit from being aware of the importance of trust. Not only is trust a necessity for businesses’ license to operate in the eyes of the public, it is also essential if technology is to be in a position to help solve our most pressing challenges.
How Can We Build Relational Trust?
Unfortunately, relational trust has never been as low as it is now, and therein lies a challenge. Trust in the intentions and impacts of technology, and in the way its power will be used, fell to an all-time low in 2021. Many stakeholders favor tighter regulation of tech companies to make sure technologies are used for good. Tech companies indeed have an extremely powerful influence, and according to many, they are too powerful.
Pew Research Center recently found that 56 percent of Americans think major technology companies should be regulated more than they are now, and 68 percent believe these companies have too much authority and influence over the economy. Many people are deeply worried about privacy, fake news, cybercrime and more, especially when it comes to devices in their homes.
A steady stream of controversies has dominated the conversation over how tech companies collect, manage, process and share massive amounts of data. Even with stated commitments to privacy and governance, the executives and founders of these companies have not convinced people that surveillance is not an omnipresent threat to their basic rights and freedoms. Mistrust of governments and corporations causes people to pause and reconsider how much faith they should put in both the leaders and the services directing these quickly evolving technologies.
More threatening instances are emerging that exemplify people’s trepidation. Even in San Francisco, a city with a tech economy where high levels of enthusiasm for digital technology might be expected, facial recognition services are stringently controlled to “regulate the excesses of technology.”
In their impactful book Tech for Life, Jim Hagemann Snabe and Lars Thinggaard argue that technology must serve humankind. They state that, although the objective of improving people’s lives and the planet through tech is valid, we must acknowledge that positive impact and “moneymaking” both need to be addressed. And it is not only the purpose of technology that needs to be refined; our entire public-private ecosystem around technology also requires repurposing.
In this regard, we will have to face difficult questions and dilemmas. Key inquiries in Snabe and Thinggaard’s book include: How do we use data without losing privacy? How do we use platforms without creating monopolies? How do we use AI without losing control? They rightly conclude that to transform our world for the better, we need to encourage and inspire the responsible use of technology by finding a balance among profitability, sustainability and trust.
Excerpted from Tech for Good: Imagine Solving the World’s Greatest Challenges by Marga Hoek. All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.