Legal Tech: Integrate AI Compliance by Asking These Questions

Banner artwork by Camilo Concha / Shutterstock.com

Your sales leaders may groan. Marketing will roll their eyes. Business and product development teams might even allege you’re blocking once-in-a-lifetime opportunities.

But we all saw how a botched AI product announcement recently embarrassed Google on the world stage, leading to a single-day loss of US$100B in market value.  

Ensuring that your company's products use AI effectively and reliably before promoting their capabilities will help the company earn and keep the consumer trust Google lost so quickly. It will also help the company avoid running afoul of the new AI regulations being developed globally.

By becoming part of the AI conversation, product counsel can embed AI compliance into the core product development processes for long-term risk mitigation. Below are four steps to get started.

1. Explain why proof is vital.

The Federal Trade Commission has warned US companies to be truthful and transparent about AI’s limitations and avoid making exaggerated claims about its capabilities.

The AI regulatory landscape will continue to rapidly develop as the technology evolves. Ascannio / Shutterstock.com

One option is to provide product development teams with guidelines or policies that outline the importance of waiting for proof and the potential consequences of making unproven claims.

Explain how false or misleading claims can lead to fines, lawsuits, and damage to the company's reputation. Informed stakeholders are more likely to fully understand and embrace the message.

Patience is rarely the popular or easy option. But companies are far better off waiting until a product’s capabilities are proven before officially disclosing and boasting about them. You may get a few groans, glares, and eye rolls. But the CEO and the board will smile when you can fend off a class-action lawsuit and avoid a public relations disaster because the public claims of your products’ AI-driven capabilities are truthful and accurate.

2. Consider developing an AI compliance program.

If there’s not one already, consider developing an AI compliance program before the next release of an AI-driven product. Doing so demonstrates a commitment to responsible AI development that helps build trust with customers, regulators, and business partners.  

The US Blueprint for an AI Bill of Rights offers guidance for US companies. Global organizations may look to the Organization for Economic Co-operation and Development's Principles on Artificial Intelligence.

An AI compliance program is a critical risk management tool that aims to provide structure for staying up to date with evolving AI regulatory requirements — an ongoing task that requires reliable and flexible processes.

3. Ask the right questions to mitigate AI risks.

To assess whether an AI product is ready to be released, carefully inventory every instance of the product’s use of AI. 

Then use the questions below to engage your product development and legal teams in identifying and mitigating potential AI risks, such as bias, privacy, security, transparency, and accountability concerns.

AI model development

Fully document the AI model development process.

  • How was this specific AI model developed?  
  • Who owns the model? Who trained it?  
  • Does the model include humans in the loop? Where and how? 
  • Can users tweak or customize the model? Will one customer's customizations be accessible to other customers? 
  • Have test users raised any concerns about product malfunctions or service shortfalls? 

AI model datasets

Many concerns and liabilities relate to data ownership.

  • Who owns the dataset your AI model uses? 
  • Who owns the improvements to the dataset? 
  • Will we continue improving the model? If so, where will we get the requisite data?  
  • Do we use customer data? In what ways? 

Third-party components

Companies rarely build AI products in a vacuum.

First, identify all the third-party and open-source components in an AI system. Then, ask:

  • Are we complying with the applicable terms?
  • How are we ensuring proper attribution?

AI privacy and security

Privacy and security are top of mind for customers, users, and regulators.

  • How do we protect the privacy of our customers and product users?
  • What specific security measures are in place?

AI ethical concerns

The use of AI should align with the company's values and goals.

  • Does the product match the company's public AI statement or policy (if applicable)?
  • What steps were taken to ensure the AI does not result in discrimination or bias?  
  • Can the product be misused or cause monetary, property, or bodily harm? In what ways? 
  • Can the product be manipulated to spread misinformation? Promote deep fakes? Interfere with elections? Incite violence? 
  • What measures are in place to mitigate the risk of unintended consequences or negative impacts on users and society? 

4. Continuously monitor the AI product roadmap.

You don’t have to tackle all the questions simultaneously or in this exact order. Rather, incorporate them into the appropriate areas of the product development lifecycle to craft an integrated AI compliance program. 

Then, stay on top of how and where the company uses AI in its product roadmap and add more questions as more capabilities are revealed. This is how you can ensure transparency and truthfulness in your company’s AI conversations while mitigating risks for your customers and your company.
