Culture and Trust Are Keys to Confronting AI Challenges and Opportunities

Artificial intelligence is a part of our lives in ways that were unimaginable just a few years ago.

AI allows doctors to implant devices that reduce the number of seizures patients suffer; lets farmers grow food in inner-city warehouses in ways that are faster, more efficient, and less harmful to the environment; and permits companies to design systems that combine the complementary skills of machines and people to improve processes and products.

The worry, of course, is that in the wrong hands AI can be used to create havoc in society, sow fear and dissension, and bring to life humanity’s biggest fears about a world where robots are in control and humans are beholden to them.

Trust is the key to creating a positive vision for AI: a way to allay people’s concerns and show them how machines can enhance the work they do, said Paul Daugherty, chief technology and innovation officer at professional services firm Accenture Plc and co-author of the book “Human + Machine: Reimagining Work in the Age of AI.”

“Trust is a massive issue for us to deal with,” and will require significant engagement from organizations with their employees and constituents, Daugherty said Wednesday in the latest in our series of #HOWMatters conversations with LRN CEO Dov Seidman.

Paul Daugherty (left), author of “Human + Machine: Reimagining Work in the Age of AI,” discusses with LRN Chief Executive Dov Seidman (right) on Wednesday the implications for humanity as artificial intelligence becomes an ever-growing facet of our lives, as part of LRN’s #HOWMatters thought leadership series.

For example, a patient must trust doctors before having a medical device implanted, and must trust that their personal medical information will be used in a responsible, ethical way. Consumers who give Amazon.com Inc. or another retailer access to their house keys for package delivery must trust that the delivery person won’t steal or snoop around.

To help foster such trust, Daugherty said, people should be reminded that they have control over the machines, since humans program them and dictate how AI and data are applied. “AI won’t do anything we don’t program it to do,” he said. “Humans are at the center of it.”

It’s important to remember humans have a “moral capacity” that is lacking in machines, and the use of “moral muscles” will become more necessary as AI becomes more prevalent in our daily lives, said Seidman.

Creating a responsible framework for AI means designing morality into it from the beginning, not trying to layer it on after the fact, said Seidman, who emphasized the need to “pause,” ask deeper questions, and consider AI through a larger lens of how it can affect humanity.

“Empathy, compassion: machines will never have that,” said Seidman. “We won’t have humans at the center unless we educate from the heart.”

Both Seidman and Daugherty spoke of the need for leaders to create cultures that address people’s fears realistically and straightforwardly, while empowering people to apply the human qualities machines can’t replicate and to work alongside machines to build the better world such technology promises.

“We are going to need leaders who can build these cultures; that is going to be essential,” said Daugherty.

Embracing innovation, getting your team comfortable with change, encouraging risk-taking, and putting people at the center of any AI initiative are key to creating the right culture for these challenging times, said Daugherty.

The best way to confront fear is to embrace it, said Daugherty, because as he put it: “Technology is not going away. Prepare yourself for it.”
