Artificial intelligence technologies promise big gains for your enterprise, but as a data strategy leader, you know those opportunities don’t come without risk.
AI has the potential to improve productivity, unlock new insights, and transform business operations. Still, there's increasing concern about the risk and potential harm that could arise when proper safeguards are not in place.
KPMG's 2020 The Shape of AI Governance report states, "While trust has long been a defining factor in an organization's success or failure, the risk of AI now goes beyond reputation and customer satisfaction — it is playing an outsized role in shaping individuals' future well-being even as few inside or outside the enterprise fully understand how it works."
In that report, 94% of responding IT decision-makers felt that organizations needed to focus more on corporate responsibility and ethics when developing AI solutions. However, the study also indicates there’s a lack of standard best practices and government regulations surrounding the use of AI.
“Businesses around the globe find themselves choosing between speed to market with AI-powered solutions and building comprehensive and foundational AI governance capabilities.” – KPMG’s 2020 The Shape of AI Governance
To mitigate risk and unlock the full potential of AI technology, enterprises need a comprehensive AI governance model to enable trust, accountability, and transparency.
What is AI governance and why do you need it?
Given the expansion of AI capabilities, it’s necessary to define what AI governance is and what it will mean for your enterprise.
TechTarget defines AI governance as "an overarching framework that manages an organization's use of AI with a large set of processes, methodologies, and tools."
Still, Kashyap Kompella, CEO of rpa2ai research (a global advisory firm focused on enterprise automation and AI), said the goals of your governance model should go beyond simply ensuring the effective use of AI technology to encompass risk management, regulatory compliance, and ethical usage.
At a large organization, there are likely already mature data governance and IT governance models in place, so why do you need AI governance?
AI is often referred to as a “black box,” and according to TechTarget, the “why” behind AI deep learning decisions is not easily understood.
Real-world data often differs greatly from the data used to train AI models, and changes over time can make the patterns and relationships an AI system has learned obsolete.
As a result, a 2021 Gartner report suggests that new AI technologies require operational controls that are generally not well understood across the corporate landscape. The report states that AI risks are not adequately addressed in most organizations due to organizational fragmentation, a production-first mentality, and a perceived lack of need.
This claim is supported by a FICO and Corinium report titled "The State of Responsible AI," which found that most enterprises deploy AI at significant risk. In fact, 65% of surveyed companies could not explain how specific AI model decisions were made, and 73% have struggled to get executive support for responsible AI practices.
Developing principles to ensure accuracy
In the absence of widespread regulatory requirements, enterprises should still implement AI principles and policies to establish control without stifling innovation.
KPMG recommends designing and implementing an end-to-end AI governance and operating model that spans the entire AI life cycle, including strategy, building, training, evaluating, and monitoring AI.
AI technologies operate in an ever-evolving environment, and as a result it's not uncommon for discrepancies to arise between the data used to train a model and the data it later encounters, a phenomenon known as dataset shift.
Enterprise Data Strategy Board member CIBC (The Canadian Imperial Bank of Commerce) has deployed principles to identify these shifts as they occur. According to The Vector Institute, CIBC's Advanced Analytics and Artificial Intelligence team covers projects ranging from improving the client experience to reducing fraud, meaning there are numerous areas where dataset shift can occur.
Ali Pesaranghader, a former CIBC Senior AI Research Scientist, said, "Data shift may potentially appear in various forms in client experience, product acquisition, ATM cash demand, fraud detection, treasury deposits, amongst other applications."
To keep its AI models accurate, the bank developed a workflow to detect dataset shifts and correct for them by adapting its algorithms to restore model performance.
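As an illustration, here is a minimal sketch of one common way such a detection step can work: comparing the distribution of each feature in recent production data against the training data using a two-sample Kolmogorov-Smirnov test and flagging features that have drifted. This is not CIBC's actual workflow; the feature names and the retraining hook are hypothetical, and the sketch assumes pandas DataFrames and SciPy are available.

# Illustrative sketch of dataset-shift detection (not CIBC's actual workflow).
# Assumes train_df and live_df are pandas DataFrames with the same columns.
import pandas as pd
from scipy.stats import ks_2samp

def detect_shift(train_df: pd.DataFrame, live_df: pd.DataFrame,
                 features: list[str], alpha: float = 0.01):
    """Return features whose live distribution differs significantly
    from the training distribution (two-sample Kolmogorov-Smirnov test)."""
    drifted = []
    for col in features:
        stat, p_value = ks_2samp(train_df[col].dropna(), live_df[col].dropna())
        if p_value < alpha:  # distributions differ more than chance would explain
            drifted.append((col, round(stat, 3), p_value))
    return drifted

# Hypothetical usage: flag drifted features, then trigger model adaptation.
# drifted = detect_shift(train_df, live_df, ["txn_amount", "atm_withdrawals"])
# if drifted:
#     retrain_and_redeploy(model, pd.concat([train_df, live_df]))  # placeholder hook

In practice, a monitoring workflow like this would run on a schedule, with the drift threshold and the choice of statistical test tuned to each application area.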
Learn how enterprises are deploying AI governance policies
When it comes to operationalizing AI governance, it can be beneficial to hear what’s worked and what hasn’t for other large-scale enterprises.
In a confidential Enterprise Data Strategy Board conversation, Ozge Yeloglu, Vice President of Advanced Analytics and AI at CIBC, will provide a behind-the-scenes look at how the organization built a hub-and-spoke AI governance model and what it envisions for the future.