The E&C Pulse - February 26, 2019

February 26, 2019 LRN Corporation
 
 
TOPICS OF CONVERSATION
 

Addressing the Growing Need for Ethical Risk Management


Reid Blackman is the founder and chief executive of Virtue, a company devoted to working with technology businesses to make sure ethics are a part of their risk management practices. He shares his insights on how ethical risk analysis can improve companies, and how ethical frameworks can shape development of artificial intelligence.

What is ethical risk management? In what ways does it differ from the type of risk management practiced at most companies?

Blackman: Today’s consumers and employees want to purchase from, and work for, ethical companies. Ethical risk management is about identifying the ways companies are at risk of falling short of that standard and devising strategies to shore up those vulnerabilities. This is not legal risk management. When companies go viral for the wrong reasons, or get beat up on Glassdoor, it’s often not due to a failure of legal compliance. It’s because the company prioritized profit over people, short-term gain over long-term sustainability, shareholders over stakeholders. Ethical risk management is about getting a business’s ethical house in order to guard against consumer and employee fallout.

What are the components of an ethical risk analysis? What can this tell a company? Once completed, how can they best apply the results?

Blackman: At Virtue our ethical risk analyses have two points of focus: organizations and emerging technology products (such as artificial intelligence, virtual/augmented reality, and biotech). When we look at organizations, we look to see how well the ethical values of the company are defined, if at all, and the extent to which there are processes built around those values. When we look at emerging tech products we perform a systematic and exhaustive ethical risk due diligence process. Both approaches tell us where a company is vulnerable to running afoul of the ethical standards of consumers and employees. Not only that, they tell us what’s high risk, what’s medium risk, and what’s low risk. That makes the results easy to apply: tackle the high-risk areas first by building processes that systematically address those vulnerabilities.

Why did you start Virtue, and why focus on the tech sector?

Blackman: I was an academic ethicist for nearly 20 years, but I was also a business owner--I co-founded and operated a fireworks wholesaling company for 15 years--and always wanted to combine my love of ethics with my love of business. But it wasn’t until a couple of years ago that I saw ethics in business mattered in a way it hadn’t before. Millennials were making their voices heard with the power of social media, and businesses were literally paying for it. I saw that being ethical is now a necessity for businesses to protect the bottom line, and once I identified the return on investment for ethics, I jumped in.

I focus on the tech sector because engineers are ringing the ethics alarm bells around a range of new technologies: artificial intelligence, virtual/augmented reality, gene editing, and more. The opportunities are huge, but so are the risks, and businesses that run afoul of ethical behavior will lose consumer confidence and employee loyalty. Recognizing this risk, combined with the intrinsically fascinating topic of emerging technologies, naturally led to my focusing on it for Virtue.

What are the two things people should be excited about as it relates to artificial intelligence? Two things that should cause them concern? How can ethical frameworks and standards help tilt us toward the good benefits and away from negative outcomes?

Blackman: There are a lot of things to be excited about, and sometimes those are also the things that should cause concern. Self-driving cars are exciting, for instance. They have the potential to reshape cities, what it means to commute to work, whether car ownership will make sense for most people, and whether travel can be safer than ever. But they’re also a cause for great concern, most obviously concerns relating to safety and the ethical “decisions” such cars will have to make when impending accidents force a tradeoff between the lives in column A and the lives in column B.

Perhaps the biggest concern with all of this, though, is the way artificial intelligence--or more specifically, machine learning--requires massive troves of data that are standardly collected without the meaningful, informed consent of those whose data it is. This is where massive breaches of privacy are already occurring and, if they go unchecked, will continue to spiral out of control. Consumers are largely ignorant of what’s going on, but after Facebook’s Cambridge Analytica scandal they’re wising up. Once they really understand, big tech may face a backlash.

One more area that’s exciting and cause for concern is automation. Artificial intelligence is increasingly allowing a range of tasks to be automated. As some people put it, AI takes care of the dumb, the dangerous and the dull. So AI might free people from those kinds of work. The cause for concern, of course, is mass unemployment as a result. AI will also create many jobs, but there is a huge problem with regard to determining who will pay for already-out-of-school workers to be retrained. Businesses won’t want to foot the bill, and we’re unlikely to see a governmental program--at least on the national scale--that fixes the problem. I actually think this is where some businesses can lead and have a big impact. The businesses that choose to take care of these workers by retraining them will benefit not only from the educated workforce, but also from the boost to their reputation for taking on the moral responsibility of caring for those whose jobs are eliminated by AI.

When it comes to an ethical framework, I prefer to think in terms of a due diligence process. At Virtue we created an ethical risk due diligence process that systematically and exhaustively identifies the ethical risks of emerging tech products. The businesses that are smart enough to look around the corner will also be the ones that avoid disaster.

You spent the first part of your career in academia, focused on ethics. How does the academic discussion of ethics differ from its practical application in companies? Can organizations use academic research to improve their ethics programs and initiatives?

Blackman: Academic ethicists often talk about fascinating questions that are irrelevant to businesses. For instance, an academic ethicist might ask, “Should we have X--facial recognition technology, robots that mimic human behavior, self-driving cars?” That question is irrelevant because it’s just a given that people are already employing, or will employ, those technologies. So the right question for a practicing business ethicist to ask is, “Given that we’re going to do X, what’s the way to do it that also mitigates the associated risks?”

I’m not sure organizations can use academic research to improve their ethics programs; ethics research tends to be pretty obscure. But I do think they can use academic researchers to translate that work into meaningful, transformative processes that mitigate ethical risks. I also think they can use those researchers as members of advisory panels. Academic ethicists are trained to think big picture and systematically about ethical issues, a skill that is hard to master. Businesses would do well to have their deliberations shaped by the input of ethicists.

What are two things organizations can do better to create ethical, values-based cultures?

Blackman: I’d say there’s one thing with two parts. First, they need an ethics statement that goes beyond legal compliance and the boilerplate corporate language of “trust” and “integrity.” Today’s ethical risks are about more than the former, and the latter is too “thin” to be actionable. Organizations need ethics statements with some meat on the bones. Second, and crucially, those ethics statements need to be translated into concrete processes and practices. Organizations can’t rely on employees to read that statement, have a change of heart and mind, and strive to live up to the values of the business. They need to create processes that are a part of everyday work life so that the values of the business are realized systematically.

 

Ben DiPietro
@BenDiPietro1
ben.dipietro@lrn.com

 

 
MIND NUMBERS
 

LRN’s State of Moral Leadership in Business 2019 report found 73% of respondents think their colleagues would do a better job if managers exercised moral authority instead of formal authority, while 72% think their company would be more successful in dealing with big challenges if their leaders had more moral authority. Eighty-two percent think their company is exposed to risks when it fails to consider the moral and ethical implications of its actions.

 

 
WHAT'S NEWS
 

A group of doctors asked the U.S. Food and Drug Administration to develop standards for algorithms embedded in medical devices that help determine medical outcomes, Axios reports.

Deloitte explores the strategic risks confronting boards and CEOs.

How do you bring gender diversity to your board? Harvard Business Review has a playbook to help get you there.

Companies are using ethics and purpose to change the way they do business, and to better engage with communities and society, Ethisphere CEO Tim Erblich writes in LinkedIn. Ethisphere also is out today with its list of the world's most ethical companies.

Companies that sell genetic test kits that trace a person's ancestry are using the information in ways people may not be aware of, Axios reports. 

Marcus Buckingham and Ashley Goodall, writing in Harvard Business Review, wonder if all the emphasis on honest and sometimes harsh feedback is having the desired effect of improving employee productivity.

Jeffrey Skilling, CEO of Enron at the time of its massive fraud, was released from prison last week after serving 12 years, The New York Times reports.

Fortune released a list of what it says are the most overpaid CEOs in the U.S.

 

 
FROM THE
WORLD OF LRN
 

Today at The New York Times' annual New Work Summit in Half Moon Bay, LRN Founder & CEO Dov Seidman explains how individuals can lead in the age of Artificial Intelligence (AI) by embracing Moral Leadership.

CLICK HERE TO LEARN MORE 


The case for moral leadership is growing stronger. Results from our latest report, The State of Moral Leadership in Business 2019, reveal that leaders who exercise moral authority in addition to their formal authority are more effective at achieving business goals.

CLICK HERE TO READ MORE

 