Key takeaways:

  • Expanding AI capabilities, particularly new generative AI bots like ChatGPT, have required companies to broaden their approach to AI governance.
  • Employees’ direct access to ChatGPT could cause significant harm — privacy violations, copyright infringement, etc. — if it’s not governed.
  • Though some companies have banned employee use of ChatGPT, this could backfire and create “shadow AI.” Instead, companies should provide training and clear policies.
  • To ensure the ethical use of AI company-wide, many have implemented advisory committees or ethics boards.
  • AI regulation is coming to the U.S., though it’ll likely lag behind the EU. We could see a state-by-state approach rather than comprehensive federal regulation.

It’s safe to say the proliferation of artificial intelligence technologies is on everyone’s minds this year as corporate interest and excitement reach a critical mass.

AI and machine learning (ML) have long promised game-changing opportunities for companies, but today, executives are feeling some pressure to move quickly.

In fact, Accenture found that 84% of C-suite leaders believe they won’t achieve their growth objectives unless they scale AI, and three-quarters believe they risk going out of business within five years if they don’t.

Still, companies need to strike a balance between innovation and governance as consumers grow increasingly aware of the potential perils, from privacy violations to unwanted bias.

What does the future of AI governance look like, and how can companies scale without exposing themselves to significant risks?

We gathered insights from expert data and analytics leaders.

Reacting to the Expansion of Generative AI

Large enterprises aren’t strangers to adopting AI and ML to drive efficiencies, but for the first time, their employees have gained direct access to these capabilities.

This year, OpenAI’s ChatGPT has exploded in popularity due to its capacity to generate human-like conversations.

The Guardian reported that ChatGPT hit 100 million monthly active users just two months after its launch, making the generative AI bot the fastest-growing consumer application in history.

Furthermore, companies such as Canva and Adobe have introduced their own generative AI tools, and Salesforce even announced plans to integrate ChatGPT into Slack.

During an Enterprise Data Strategy Board panel on responsible AI in 2023, Kevin Petrie, Vice President of Research at Eckerson Group, touched on the issues caused by easy access to these models.

He said, “Consumers, for the first time, are interacting firsthand with AI bots in a way they weren’t before, and there’s a profound society-wide anxiety about AI related to bias and privacy.”

Ensuring the Safe and Ethical Use of AI

Employers are struggling to govern the use of generative AI, and the risks could be serious, from copyright infringement to major privacy violations. For example, Cyberhaven recently reported that 11% of data entered into ChatGPT by employees is confidential.

As a result, there’s a growing list of companies restricting employee use of ChatGPT; according to Forbes, Apple, Samsung, Amazon, and JPMorgan Chase have banned or limited internal use of the bot.

However, some say prohibiting employee access to these bots might not be the right solution. Neil Thacker, Chief Information Security Officer at Netskope, believes this approach could actually backfire.

He told TechTarget, “Banning AI services from the workplace will not alleviate the problem as it would likely cause ‘shadow AI’ — the unapproved use of third-party AI services outside of company control.”

Instead, organizations need to broaden their approach to AI governance to ensure employees are using these tools safely.

During the Enterprise Data Strategy Board panel, Natalie Heisler, Managing Director of Responsible AI at Accenture Applied Intelligence, echoed this thought.

“Generative AI is mediating a workflow, as opposed to traditional AI, which perhaps created a prediction or an output that we know and understand and can more easily interpret,” Natalie said. “It’s asking us to broaden the lens through which we look at that impact.”

This means organizations need to consider not only the output itself but also how a human will interpret it and act on it.

There are steps companies can take to mitigate risk, such as creating a comprehensive policy around acceptable uses of AI, providing employee education and training, monitoring usage, and working with legal counsel to ensure compliance.

Some organizations have even implemented AI advisory committees to promote enterprise-wide accountability.

For example, IBM developed a central, cross-disciplinary AI Ethics Board to serve as one mechanism by which the company holds itself accountable.

Keeping an Eye on Future Regulations

It’s clear that concern about AI and personal data is rising, and that concern has not escaped the attention of regulators.

The European Union proposed the Artificial Intelligence Act, a legal framework that aims to significantly bolster regulations on the development and use of AI.

Per the World Economic Forum, the proposed legislation, which could go into effect by the end of 2023, is primarily focused on addressing data quality, transparency, human oversight, and accountability.

We can expect AI regulation to come to the U.S. as well, though it may occur at a more gradual pace.

Notably, in June, a bipartisan group of legislators introduced the “National AI Commission Act” to create a commission focused on regulating AI, according to Forbes.

Despite the U.S.’s slower pace, Forbes highlighted how U.S. enterprises still react to the EU legislation, even when they’re not covered by the law.

For example, research showed that after the EU passed the General Data Protection Regulation (GDPR), U.S. companies still changed the way they collected and stored personal data.

Still, this slow movement toward AI regulation in the U.S. could increase the likelihood of a patchwork of state laws as opposed to federal regulation.

Today, cities and states have adopted laws related to specific AI use cases. For example, at the start of 2023, New York City adopted a law restricting the use of automated employment decision tools (AEDTs).

Moving forward, different jurisdictions may continue to take varied approaches to AI policy.

Panelists touched on this idea during the Enterprise Data Strategy Board discussion on responsible AI, suggesting that the most conservative approach may be the safest strategy.

In other words, if you’re compliant with the strictest standard in force, you can be confident you’ll largely meet impending regulations elsewhere.

During the panel, Aditya Anne, Director of Advanced Analytics and AI at CIBC, said, “In the end, they all follow the same set of principles; it’s always about privacy; it’s always about making sure your algorithms are fair and explainable. If you take a principle-based approach in your organization, you shouldn’t be worried about how these regulations are shaped out in different places.”

Benchmarking with Other Data Analytics Leaders

It’s clear that the AI world is moving at a rapid pace, and enterprise leaders need to stay nimble.

Overall, Natalie said this increased risk awareness among both consumers and corporations will only yield positive results.

She said, “Without understanding the risks, we can’t mitigate and control them. I think that with the knowledge in hand, we can make deliberate and responsible choices about where to set the guidelines and how to innovate responsibly.”

Keeping up with this evolving environment isn’t easy, but benchmarking with other enterprise leaders can help you gut-check your own strategy.
