
Key takeaways:

  • Accountability and explainability are closely intertwined. Not only does the system itself need to be interpretable, but stakeholders at every level should be able to explain its decisions. 
  • The level of interpretability required will greatly depend on the use case. 
  • Because so many people contribute to an AI system, it’s hard to establish responsibility, but an AI ethics or advisory board could serve as a single oversight body. 
  • Promoting accountability is not about who to blame when things go wrong but who to call to make things right.

Emerging artificial intelligence technologies can offer seemingly limitless corporate possibilities, but we know those outcomes are not always favorable.

In fact, an International Data Corporation survey of global companies using AI solutions found that at least a quarter of them reported a failure rate of up to 50%.

Not to mention the dozens of high-profile examples of corporate AI disasters. So, what happens internally when an AI system fails to produce an accurate or fair result, and who should be held responsible?

Sanjay Srivastava, Chief Digital Officer at Genpact, a professional services firm, told TechTarget that those who use AI cannot separate themselves from the liability or the consequences of those uses.

He said, “There will be large systemic disruptions and, therefore, large benefits to those who lead with AI. But along with that comes a responsibility to set up guidelines and transparency in the work you do.”

It’s clear that accountability can’t be an afterthought when developing these systems. Let’s dive into how enterprises can establish responsibility and ethical guardrails around AI and ML.

What does accountability look like?

Beena Ammanath, who leads Trustworthy AI and Ethical Tech at Deloitte, addressed AI accountability in an article for Chief Executive Group. In the article, she demonstrates how accountability and explainability are closely intertwined.

Beena wrote, “Accountability means that not only can the AI system explain its decisions, the stakeholders who develop and use the system can also explain its decisions, their own decisions, and understand that they are accountable for those decisions. This is the necessary basis for human culpability in AI decisions.”

Not only is explainability crucial to assigning accountability, but research suggests it can also positively impact the bottom line. 

McKinsey & Company found that organizations that establish digital trust among consumers through practices such as making AI explainable are more likely to see their annual revenue grow at rates of 10% or more.

Yet, as AI capabilities expand and models become more complex, explainability becomes increasingly challenging to achieve. 

Beena said, “The challenge becomes that much greater when the AI tool’s complexity increases, the consequence of its decisions magnifies, and it is deployed at scale alongside dozens or hundreds of other systems.”

During an Enterprise Data Strategy Board panel on responsible AI, Natalie Heisler, Managing Director of Responsible AI at Accenture Applied Intelligence, explained that the level of necessary interpretability will depend on the use case.

“Your requirements for interpretability are going to scale according to the impact of the use case,” Natalie said. 

For example, inconsequential uses of AI, perhaps an employee using ChatGPT to write a social media post, don’t require a high level of explainability, whereas an AI model used to grant a loan calls for greater interpretability.

“For some inconsequential uses of AI, you may have pretty low requirements for explainability and interpretability. When you’re making a decision on a loan or to grant a benefit, you’re going to have a high level of requirement for interpretability.”

Natalie Heisler, Managing Director of Responsible AI at Accenture Applied Intelligence

Conducting an impact assessment can aid in this decision-making process.
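As a hypothetical illustration of what a higher interpretability requirement might look like in practice, the sketch below trains a simple, inherently interpretable loan-approval model and reports each feature’s contribution to an individual decision. The feature names and data are invented for the example; this is not a description of any specific vendor’s approach.

```python
# Hypothetical sketch: for a high-impact use case such as loan approval,
# one option is an inherently interpretable model whose decisions can be
# traced back to individual features. Data and feature names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_history_years"]

# Synthetic applicants: columns correspond to the feature list above.
X = rng.normal(size=(500, 3))
# Synthetic approval labels loosely tied to the features, for demo purposes only.
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

applicant = X[0]
# Per-feature contribution to the decision score (log-odds),
# which stakeholders can point to when explaining an outcome.
contributions = model.coef_[0] * applicant
for name, value in zip(features, contributions):
    print(f"{name}: {value:+.3f}")
print("approval probability:", model.predict_proba([applicant])[0, 1])
```

For lower-impact use cases, this level of per-decision traceability may be unnecessary; the point is that the explanation artifacts an organization produces should scale with the consequences of the decision.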

Where does the responsibility fall?

Because so many individuals across the enterprise contribute to developing and adopting an AI tool, it can be difficult to determine who is accountable when something goes wrong.

Based on this, Beena explains that a “neat distribution” of responsibility is unlikely and becomes more problematic when leveraging AI models that change over time. 

She suggests that an advisory board could be well-positioned to promote accountability and provide oversight over the processes and penalties of AI applications.

For example, IBM developed a central, cross-disciplinary AI Ethics Board to serve as one mechanism by which the company holds itself accountable.

Beena also highlighted the importance of enterprise-wide accountability, writing that employees at any level need to understand and embrace their own responsibility in the AI lifecycle.

Bring together key stakeholders

In order to establish cross-enterprise accountability, you need to bring together the right stakeholders. 

Stephen Sanford, Managing Director at the U.S. Government Accountability Office, told Harvard Business Review that you need to involve stakeholders at every point in the AI lifecycle — design, development, deployment, and monitoring.

He added, “The full community of stakeholders goes beyond the technical experts. Stakeholders who can speak to the societal impact of a particular AI system’s implementation are also needed.”

During the Enterprise Data Strategy Board panel, Natalie advised those listening to begin stakeholder engagement early on. She added that it’s important to establish a baseline level of data literacy and understanding of how AI reinforces the values of the company. 

“Because if you ask 10 people in the organization ‘what’s the definition of AI?’, much less generative AI, you’re going to get so many different definitions.”

With accountability and a better understanding of AI tools, an enterprise will see a greater level of trust in these new capabilities.

AI accountability is a work in progress

Given how new much of this technology is, there is a perceived lack of standards for the responsible and ethical use of AI.

How enterprises establish accountability today can have long-standing impacts moving forward. 

Beena wrote, “Enterprises may be best served by not focusing on who is to blame when things go wrong but instead who to call on to make things right. With a clear articulation of who is accountable for what and to whom, the business is prepared to respond to AI outcomes and take real accountability for addressing errors with corrective actions.”

Setting standards in a rapidly changing environment can be a challenge, but benchmarking with other data analytics leaders can provide reassurance and confidence in your own strategy.
