Interviews
How to Build Trust in Artificial Intelligence: An Interview with Jon Iwata
Published: February 3, 2024

Jon Iwata (Yale School of Management, USA)
The International Monetary Fund recently released a study predicting that artificial intelligence (AI) will affect close to 40 percent of all jobs. For some, it will be beneficial, boosting their productivity. For almost everyone else, it puts their jobs at risk. This report was published as business and political leaders from around the world prepared to gather in Davos, Switzerland, for the World Economic Forum Annual Meeting, where AI took center stage. Highlighting the apprehension around this “disruptive” technology, the response from governments has been surprisingly swift. A number of countries signed a declaration on the safe development of AI technology at the first global AI Safety Summit, hosted by the United Kingdom late last year. And we’re seeing increased regulation of AI around the world, including in China, the European Union, and the United States—the world’s largest economies.
As businesses across all sectors explore AI’s potential, they must also wade through its unknowns and navigate evolving regulation. In other words, they must innovate and use AI responsibly. The most recent episode of Brand & New, INTA’s podcast, featured a conversation between Heather Steinmeyer, INTA’s Chief Policy Advisor, and Jon Iwata. Mr. Iwata is an Executive Fellow at the Yale School of Management, where he co-leads a program studying the leadership implications of stakeholder capitalism. He also directs the Data & Trust Alliance, a not-for-profit organization established in 2020 by CEOs of major companies, including American Express, Johnson & Johnson, Nike, Pfizer, Starbucks, and Walmart. The Alliance develops and promotes the adoption of responsible data and AI practices. Among his various accolades and accomplishments, Mr. Iwata is also a co-inventor on a U.S. patent for a nanotechnology and process for atomic-scale semiconductors. His conversation with Ms. Steinmeyer centered on the responsible use of AI and the role that data plays in establishing trust in AI.
Below is an excerpt from Mr. Iwata’s Brand & New podcast interview. It includes some minor edits to improve readability.
Skeptics have been wary of AI since its conception in the 1950s, which seems like a long time ago to many. Why, in your view, is it having its breakthrough moment now?
The concept of artificial intelligence is decades old. However, the attempts to create technologies that fulfill those early concepts have changed quite a bit. Initially, there was a kind of brute-force computational model that was just trying to come up with answers. There were expert systems that tried to codify how humans went about their expert work, and these were all deterministic systems: you were programming the system to operate. What’s different today is that these are not systems that are programmed. These are systems that learn and continue to learn. This is possible now because of the tremendous amount of data being generated every second of the day and the availability of computation and storage in the cloud. It’s the confluence of new techniques, machine learning and deep learning, with the phenomenon of data. AI today absolutely requires that data, but the data also creates a lot of things that have to be addressed very carefully and thoughtfully.
And yes, computational power is a factor, but it’s a bit of a misconception to attribute the breakthrough to these massive machines or supercomputers. The computational power is staggeringly more advanced than what we had in the past, but it really is the phenomenon of data and these new techniques of machine and deep learning that are causing this breakthrough.
Thinking about how we can positively embrace technology like AI, you co-founded the Data & Trust Alliance in 2020. It brings together leading businesses and institutions from across multiple industries to learn, develop, and adopt responsible data and AI practices, but with a focus on accelerated learning and implementation. This dynamic speaks to and perhaps even seeks to resolve the contentious nature of AI. On the one hand, we want to adopt AI as quickly as we can from a new technology standpoint. On the other hand, we need to do so responsibly. Let’s take the example of a brand looking to bring innovation into its product design, supply chain, or an enhanced experience for its customers. In practical terms, what does the responsible use of AI that is fostered by the Alliance look like for a brand attempting to make this type of transition?
Let me answer that question by illustrating it with the very first project that the Alliance chose to take on back in 2020. The Alliance was formed by CEOs of companies like American Express, General Motors, Johnson & Johnson, Nike, Pfizer, Starbucks, and others, because they shared a conviction that AI was going to be the next basis of competitive advantage, and they did not want to be disrupted. Rather than being forced into responsibility through regulation or through the school of hard knocks—making mistakes and suffering unintended consequences—they chose to take advantage of the technology in ways that mitigate those harms and instead demonstrate responsibility. Businesses don’t typically invite regulation, but they acknowledge that society has a right to pass regulation if businesses don’t regulate themselves.
The first project they wanted to focus on involved the use of AI in human resource functions. Walmart, a founding member of the Data & Trust Alliance, is the second largest employer in the United States. How many people apply for jobs at companies like CVS, Starbucks, and Walmart every day? Do we think that a human being is thoughtfully reading through every job application that’s being uploaded digitally?
No. HR functions have been using algorithms to help with hiring decisions for some time. They’ve been screening job applications for fit, making recommendations, and personalizing things for employees. Our first project was to look at how HR teams use data, algorithms, and AI for decision support across the entire lifecycle of employment: from the time you apply for a job to when you join a company, get promoted, receive pay and benefits, and so forth.
We surveyed the HR professionals, the leaders of HR in these companies, and found that their greatest concern about the use of technology for their work was not privacy. That was number two. It was not security, which was number three. It was the perpetuation of historical biases in hiring decisions. Their concern was that if they used the technology, they would simply carry historic mistakes into the future. They wanted to know whether the technology was making things better or, in fact, worse.
We worked with HR and procurement professionals to create a set of what we called anti-bias algorithmic safeguards to properly screen the vendors they were going to use, ensuring that their suppliers and vendors had in place the proper management system controls and mechanisms to identify, mitigate, and monitor bias. That brings us back to what’s different about AI in our time: the data. If you’re only using historical data to train these systems and to help you make those decisions, and that historical data tended toward hiring or promoting certain kinds of folks, then unless you correct for that, those patterns will simply continue into the future.
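To make the bias-perpetuation point concrete, below is a minimal, purely illustrative sketch in Python of the kind of check a team might run on historical hiring data before training a screening model on it: comparing selection rates across groups and computing the common “four-fifths” warning ratio. The column names and toy data are invented for this example; this is not the Alliance’s actual safeguards.

```python
# Illustrative sketch only: not the Data & Trust Alliance's safeguards.
# Column names ("group", "hired") and the toy data are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str = "group",
                    outcome_col: str = "hired") -> pd.Series:
    """Share of applicants hired, computed per group."""
    return df.groupby(group_col)[outcome_col].mean()

def adverse_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest.
    Values below roughly 0.8 are a common warning sign of skew."""
    return float(rates.min() / rates.max())

if __name__ == "__main__":
    historical = pd.DataFrame({
        "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "hired": [1, 1, 1, 0, 1, 0, 0, 0],
    })
    rates = selection_rates(historical)
    print(rates)  # group A: 0.75, group B: 0.25
    print(f"adverse impact ratio: {adverse_impact_ratio(rates):.2f}")  # 0.33
    # A model trained on this history without correction is likely to
    # reproduce the same skew in its recommendations.
```

A real safeguard program would of course go further, covering how vendors identify, mitigate, and monitor bias over time, as Mr. Iwata describes.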
Businesses have to be able to trust that they can deploy an AI that isn’t going to get them into hot water from a regulatory or other perspective. Consumers also rely on businesses to be trustworthy in that sense. One way that trust shows up from a consumer standpoint is through trademarks and brands, for example. Brands represent consumer trust, and that trust must be earned. It can also be lost. In that context, can you tell us a little bit more from your perspective about how using AI without the foundations in place for trustworthy outcomes can be damaging? What does it look like to build trust in AI?
There are many different drivers of trust. One is transparency. If a decision is being made that is assisted by or perhaps even made by technology, transparency would dictate that I, as a consumer, as a job applicant, as a patient, as an insurance customer, should know that technology has either made that decision or is assisting a human being in making that decision.
Transparency gets rather deep. Talk to me about the algorithmic decisions. Talk to me about the datasets. Talk to me about the training regimens. Now, this is all new vocabulary, and this is why a lot of the work of the Alliance is to enhance AI literacy. We do not have to become experts on data propagation and neural networks, but all of us as people need to have some grasp of how to ask the right kinds of questions and understand what a good answer is and what a poor answer is.
Transparency is a driver of trust, and related to that is explainability. Companies need to be able to explain to me why this AI is outputting this particular recommendation or answer. This is particularly important in critical decisions. If you think about a doctor or a radiologist or a loan officer who’s making a decision and looking to an AI system to help them make a good decision, they should be able to interrogate that system. It shouldn’t just be yes or no. They should be able to ask, why are you making that recommendation? Explainability means that the system has to be architected and designed in such a way that it’s not opaque.
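As a purely illustrative aside on what “interrogating” a system can look like, the sketch below trains a deliberately transparent model on toy loan data and reports the per-feature contributions behind a single recommendation. The feature names and data are invented, and this is only one of many possible approaches to explainability, not a prescription from the interview.

```python
# Illustrative only: a transparent model whose individual recommendations
# can be broken down by feature. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]
X = np.array([
    [60, 0.2, 5],
    [30, 0.6, 1],
    [80, 0.1, 10],
    [25, 0.7, 0],
    [55, 0.3, 4],
    [35, 0.5, 2],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved in the historical data

model = LogisticRegression().fit(X, y)

applicant = np.array([[45, 0.4, 3]])
recommendation = "approve" if model.predict(applicant)[0] == 1 else "decline"
contributions = model.coef_[0] * applicant[0]  # per-feature weight times value

print("recommendation:", recommendation)
for name, value in zip(features, contributions):
    print(f"  {name}: {value:+.2f}")
```

More complex, opaque models need purpose-built explanation tooling; that need is the architectural point being made here.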
Finally, because these models are so dependent on data, you’ll hear a lot about data quality. But there’s no universal definition, even among data scientists, as to what data quality is. There are so many potential attributes and parameters of that. But our latest project at the Data & Trust Alliance was to try to resolve one aspect of it, which turned out to be data provenance.
You can think about provenance in everyday life. We take for granted that somebody understands and knows with certainty the origin of our food or water or money. I can’t go down to my bank and deposit $100,000 in cash without having to answer a lot of questions about where that came from. It’s a necessary precondition of transactions that are based on trust. And yet, if you ask in AI, what is the provenance of the datasets that train my model and that feed my model, you’re going to get a lot of different answers. We spent almost a year working with 19 different data science, compliance, and other teams from across the Alliance companies to hammer out a set of data provenance standards that, we hope, will be widely adopted.
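As a small illustration of what recording provenance might look like in practice, here is a hypothetical sketch of a machine-readable provenance record for a training dataset. The field names are invented for this example and are not the Alliance’s published data provenance standards.

```python
# Hypothetical example of a dataset provenance record; the fields shown here
# are illustrative, not the Data & Trust Alliance's published standards.
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class ProvenanceRecord:
    dataset_name: str
    source: str                 # where the data originated
    collection_method: str      # how it was gathered
    collection_date: str        # when it was gathered (ISO 8601)
    license: str                # terms under which it may be used
    intended_use: str           # purpose for which it was collected
    transformations: List[str] = field(default_factory=list)  # processing lineage

record = ProvenanceRecord(
    dataset_name="job_applications_2019_2023",
    source="internal applicant-tracking system",
    collection_method="online application form",
    collection_date="2023-12-31",
    license="internal use only",
    intended_use="screening-model training",
    transformations=["deduplicated", "names and contact details removed"],
)

print(json.dumps(asdict(record), indent=2))
```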
The Business of AI Conference will take place March 20–21 in New York City. Learn more and register for the Conference by March 18.
Although every effort has been made to verify the accuracy of this article, readers are urged to check independently on matters of specific concern or interest.
© 2024 International Trademark Association