Legal Tech: How to Evaluate AI Vendor Risks

Banner artwork by Mameraman.

AI is exciting — and risky. No one has all the answers yet. But we still need to ask questions.

Courts, legislators, and regulators will define AI's boundaries for years to come. Meanwhile, you want to help your company avoid the costly lawsuit that becomes the test case for a landmark AI decision.

That means proactively asking targeted questions about the AI-driven products and services your company buys from third-party vendors. Until we have complete regulatory guidance, consider asking AI product and service vendors the critical questions below to help you navigate potential risks.

Carefully vet AI providers to mitigate potential future risks. Artwork by Pasuwan.

Get to know the ethics of AI-driven systems. 

Start by reviewing the vendor's AI policies and procedures. Learn how they prioritize ethical considerations when developing and deploying AI-driven systems.

Questions to ask AI vendors:

  • What’s your philosophy on ethics and AI? Do you have a public AI statement or policy you can forward to me? 
  • Can you confirm that your AI models were trained with properly licensed content? 
  • Can your AI tools be used to cause harm? What steps have you taken to prevent that from happening? 
  • In what ways could your AI system produce inaccurate or misleading results? How do you address those issues? 
  • Does your company face any pending AI-related litigation, threats of litigation, or regulatory inquiries?


Prepare to collaborate with others.

To meet any future AI regulations, as well as any guidelines set by clients, internal stakeholders, or others, you’ll need to understand the ins and outs of your vendors' AI processes.

Questions to ask vendors: 

  • What dataset did you use to train your AI model? Who trained it? 
  • Why did you choose the training methods you used?   
  • What quality control procedures do you use?  
  • Will you periodically retrain or continuously update your model? If so, on what data?  
  • Is the system auditable? Can we customize the audit process? 
  • Does your generative AI model include links to citations so we can verify the case law and other information it references and confirm its assertions, i.e., to demonstrate that it isn't hallucinating? 

Identify potential sources of privacy and security violations. 

AI tools may collect and process large amounts of personal data that must be secured appropriately. If vendors use data in ways that violate privacy laws or regulations, your company can suffer significant legal and reputational damage. Hackers can exploit vulnerabilities to access sensitive data and disrupt operations.  

Questions to ask vendors: 

  • How do you test for bias and accuracy in your AI model? 
  • What is your track record concerning bias and accuracy? 
  • Do you keep humans in the loop to review your AI model’s results? 

An interconnected world means additional parties. 

Partnering with an AI vendor carries the same inherent risks as partnering with any vendor. The level of risk depends on how much access a vendor has to your data and systems. That risk is amplified when a vendor partners with additional product or service providers.

Questions to ask vendors: 

  • What AI use is on your product roadmap?  
  • What is the timeline for rollout?

And don’t forget the time-honored catch-all question:

What else should we have asked you about your AI? 

Questions are currently your best tool for vetting vendors of AI tools. Actively seeking answers during vendor selection helps you avoid AI bias, privacy violations, security breaches, and other potential legal liabilities. Carefully vet AI providers now to deploy AI safely and help ensure your organization's processes remain reliable, ethical, and ready to comply with legal and regulatory requirements as they arrive.