Key takeaways:

  • AI might be a risk multiplier, but those risks aren’t necessarily new to privacy folks. Your pre-existing procedures are likely still applicable.
  • AI has amplified the need to have a robust privacy program in place. You need a strong privacy culture, enterprise partnerships, and an effective review and assessment process.
  • You should prioritize employee training and education to avoid the unintended sharing of confidential information.
  • Privacy by Design will be crucial as companies aim to develop in-house AI/ML. Privacy needs to be embedded in every AI system.

The data space is no stranger to hype and buzzwords; even so, new artificial intelligence technologies have certainly garnered unprecedented attention. 

Forrester reported that global enterprise spending on AI software is expected to reach $64 billion by 2025, up from $33 billion in 2021.

The heightened interest in AI systems has also drawn attention to many risks, mainly the potential for unintended bias and privacy violations. 

Companies are already experiencing the pitfalls of the expanded use of AI, particularly generative AI, in the workplace. In fact, Cyberhaven recently reported that roughly 11% of data employees put into ChatGPT is confidential. 

This was the case for Samsung. In April, news broke that the company discovered employees putting confidential information into ChatGPT, including proprietary source code and transcripts of internal meetings.

As corporate enthusiasm for AI continues to rise, it’s clear data privacy leaders will serve a key role in risk mitigation. 

Let’s dive into how responsible AI guidelines will fit into your greater privacy program.

Adapt Your Policies But Don’t Try to Reinvent the Wheel

During a recent Data Privacy Board panel on industry evolutions, panelists discussed how they’re taking on AI accountability within their enterprises. 

[Image: Data Privacy Board panel. From left: Drew Bjerken, Andy Keller, Caroline Parks, and Xochitl Monteon.]

Panelists agreed that despite the hype and relative newness of these systems, your approach to containing privacy risks associated with AI shouldn’t stray too far from your existing procedures. 

Your initial reaction to the buzz might be to develop an AI-specific policy, but it’s likely your current guidelines already cover AI.

Drew Bjerken, Vice President and Head of Global Privacy at Marriott Vacations Worldwide, said, “At the end of the day, AI is just another means of processing information. When we looked at it, we already had processes in place.” 

Panelist Caroline Parks, Senior Director and Corporate Privacy Counsel at Expedia Group, echoed this thought, saying AI has simply amplified the need to ensure your current processes are working correctly.

This includes having a strong privacy culture, enterprise partnerships, and an effective review and assessment process. 

That’s not to delegitimize the very real risks: the volume of data involved and the media attention could turn any misstep into a significant issue.

Drew said, “I do see AI as a risk multiplier. And I simply say that just because people use AI to be more efficient, which means that it’s going to be a force multiplier. If you don’t do your upfront due diligence, it’s going to get out of hand and cause a bigger mess a lot faster than we’re used to.”

However, Drew also suggested that the fundamentals of those risks aren’t necessarily new to privacy folks.

Prioritize Employee Education and Training

While your pre-existing procedures are likely applicable to mitigating AI risks, panelists also agreed this newfound interest in AI does call for a boost in employee education.

The increased popularity of AI can be a risk in itself. More and more employees are turning to these systems to drive efficiencies in their roles, but using AI for these seemingly innocuous tasks can have serious consequences.

For example, in Samsung’s case, one employee used ChatGPT to create meeting minutes and overlooked the sensitivity of the information. The incident happened just weeks after the company lifted a previous ban on the model.  

Similarly, Drew brought up an AI application that builds PowerPoint presentations. What employee wouldn’t want to reduce the time spent creating slides?

But these unintended consequences are why panelists agreed it’s beneficial to remind and reeducate employees on existing policies. 

During the panel, Xochitl Monteon, Vice President and Chief Privacy Officer at Intel, highlighted the importance of “doubling down” on privacy training, especially around generative AI. 

Your workforce needs to understand that AI models must go through the same level of review and assessment as any other form of tech. 

Furthermore, Xochitl said it’s important for employees to understand how generative AI models like ChatGPT differ from datasets and other data models. 

To institute these reminders, Caroline said Expedia has implemented splash screens that alert employees to the required processes and policies when they land on certain URLs.
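
As a rough illustration, a reminder like this could be delivered through a browser-extension content script. The sketch below is a minimal, hypothetical version: the monitored domains and the policy wording are invented for the example, and a real deployment would source both from enterprise policy tooling.

```typescript
// Minimal content-script sketch: show a policy reminder overlay when an
// employee lands on a monitored generative AI site. The domain list and
// wording are hypothetical examples, not Expedia's actual implementation.
const MONITORED_HOSTS = ["chat.openai.com", "gemini.google.com"]; // assumed

function showPolicySplash(): void {
  const overlay = document.createElement("div");
  overlay.style.cssText =
    "position:fixed;inset:0;z-index:99999;background:rgba(0,0,0,0.85);" +
    "color:#fff;display:flex;align-items:center;justify-content:center;" +
    "text-align:center;padding:2rem;font-family:sans-serif;";
  overlay.innerHTML =
    "<div><h1>Reminder: Acceptable Use of Generative AI</h1>" +
    "<p>Do not paste confidential code, documents, or customer data into this tool.</p>" +
    "<button id='ack-policy'>I understand</button></div>";
  document.body.appendChild(overlay);
  // Dismiss only after the employee acknowledges the reminder.
  overlay
    .querySelector<HTMLButtonElement>("#ack-policy")
    ?.addEventListener("click", () => overlay.remove());
}

if (MONITORED_HOSTS.includes(window.location.hostname)) {
  showPolicySplash();
}
```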

The Importance of Privacy by Design

Not only are large companies utilizing off-the-shelf AI systems like ChatGPT, but many are developing their own models. 

As they begin to build these systems, panelists agreed that implementing privacy by design principles will be critical in avoiding bias or discrimination and protecting sensitive data. 

In fact, Dr. Ann Cavoukian, known for creating the concept of privacy by design, recently highlighted the importance of these principles in the new age of AI. 

During an interview with The Privacy Whisperer, she said data protection must be embedded into every AI system. 

“I want you to have someone on staff who can look under the hood. You want to make sure that AI is working in a privacy-respectful manner. We need to know what it’s doing in terms of any personally identifiable data it comes into contact with.”

Dr. Ann Cavoukian

Xochitl also stressed the need for privacy to be embedded in new technology and shared how it’s benefited Intel, saying, “We’ve been a proponent of responsible AI for quite some time, and it’s allowed for tighter integration between those workflows and processes.”
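
What “embedding privacy” looks like in practice will vary by company, but one concrete starting point is screening prompts for personal data before they ever leave the building. The sketch below is a simplified illustration, not a vetted data loss prevention tool: the regex patterns are crude assumptions, and production systems pair pattern matching with classifiers and human review.

```typescript
// Illustrative pre-prompt screen: redact a few common identifier formats
// before text is sent to an external model. Patterns are simplistic
// assumptions; real DLP tooling uses far more robust detection.
const REDACTION_RULES: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED-SSN]"],        // US SSN format
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[REDACTED-EMAIL]"], // email addresses
  [/\b(?:\d[ -]?){13,16}\b/g, "[REDACTED-CARD]"],       // likely card numbers
];

function redactForPrompt(text: string): { clean: string; redactions: number } {
  let clean = text;
  let redactions = 0;
  for (const [pattern, label] of REDACTION_RULES) {
    clean = clean.replace(pattern, () => {
      redactions += 1;
      return label;
    });
  }
  return { clean, redactions };
}

// Example: flag rather than silently forward when anything was caught.
const { clean, redactions } = redactForPrompt(
  "Contact jane.doe@example.com about ticket 123-45-6789."
);
if (redactions > 0) {
  console.warn(`Caught ${redactions} potential identifier(s) before sending.`);
}
console.log(clean);
```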

Data Privacy Board members also discussed AI in a confidential conversation with Dr. Vlad Eidelman, Chief Technology Officer and Chief Scientist at FiscalNote, Board.org’s parent company. 

During the conversation, members were advised that when assessing AI projects, they should ask questions such as where the data currently resides, whether it will be shipped externally, and what kind of guarantees the third party will make about it.

Another key point was to examine the potential for reputational risk in the event of a leak. 

Even for those enthusiastic about AI’s possibilities, it’s worth questioning whether a large language model is necessary at all; many problems can still be solved with other, lower-risk methods.
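
Those questions translate naturally into a structured intake record that a privacy team could require before any AI project moves forward. The shape below is hypothetical; the field names and escalation rule are invented to mirror the questions raised in the conversation.

```typescript
// Hypothetical intake record for an AI project privacy review, mirroring the
// questions above. Field names and options are illustrative, not a standard.
interface AIProjectReview {
  projectName: string;
  dataLocation: "on-prem" | "company-cloud" | "third-party"; // where the data resides today
  dataLeavesCompany: boolean;     // will data be shipped externally?
  thirdPartyGuarantees: string[]; // e.g., contractual no-training clauses
  reputationalRiskIfLeaked: "low" | "medium" | "high";
  llmActuallyRequired: boolean;   // could a lower-risk method solve this?
}

// Example: a project that should trigger closer scrutiny.
const review: AIProjectReview = {
  projectName: "support-ticket-summarizer",
  dataLocation: "third-party",
  dataLeavesCompany: true,
  thirdPartyGuarantees: ["no training on our data", "30-day deletion"],
  reputationalRiskIfLeaked: "high",
  llmActuallyRequired: false,
};

// A simple (assumed) escalation rule: external data flows combined with high
// reputational risk, or an unnecessary LLM, go to the privacy team.
const needsEscalation =
  review.dataLeavesCompany &&
  (review.reputationalRiskIfLeaked === "high" || !review.llmActuallyRequired);
console.log(`${review.projectName}: escalate to privacy team? ${needsEscalation}`);
```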

Benchmark with Your Fellow Privacy Leaders

As AI continues to expand, enterprise privacy pros will play a fundamental role in protecting employees and consumers.

“I do think it is incumbent upon those of us in privacy to make sure we understand those use cases and make sure that, as best as possible, we’re not allowing bias to be built into any kind of those AI schemas.” 

Drew Bjerken, Marriott Vacations Worldwide

Still, when it comes to the rapid development of AI, there are often more questions than answers. Peer benchmarking is an invaluable tool in an evolving industry like data privacy. 
