In a recent facilitated discussion, ACC members talked about how they use AI, what they are worried about, and how they are moving forward. The dialogue was conducted under the Chatham House Rule, so no speakers or organizations are identified.
Lots of potential, for good and ill
The conversation underscored the enormous opportunities artificial intelligence presents for businesses. AI capabilities are increasing at a staggering pace; large language models are expected to become vastly more capable than current offerings. But as a society, we need to align superintelligent AI with human needs. The last thing we need is AI "going rogue," as ChatGPT's creator recently put it.
There is no roadmap available; this is trial and error, seeing what works for your business.
Create a task force with the IT and compliance teams
ACC members discussed aligning the legal department with their organizations' IT and compliance teams to create an internal playbook for reviewing contracts with any outside vendor utilizing AI. This team also published organization-wide guidelines on how data may be used with AI technologies.
ACC members discussed concerns about implicit bias, particularly in the use of generative AI. They also shared concerns about any customer-facing products utilizing AI, including chatbots.
AI use for creative projects
An ACC member discussed discovering that AI-generated content had unknowingly been used in developing creative intellectual property, and the potential trademark and copyright issues involved.
ACC members said they use AI to expedite contract review. Trained to check contracts against certain requirements, the AI flags those containing substantial deviations.
Another successful use involved a company with multiple contracts across different jurisdictions; AI helped it determine when it was appropriate to change the governing jurisdiction.
Another member asked whether AI can insert comments into a contract, as this member often applied the same comments when redlining contracts again and again. While no one knew if this was possible, they suggested asking an AI vendor to look into the capability.
Negotiating with AI technology vendors
When looking to contract with an AI vendor, understand the functionalities you want in a solution. The more customization, the more expensive the solution will be, generally speaking.
One of the challenges is negotiating indemnity provisions, because this area is so new. Negotiate a right of prior approval over any unilateral changes to terms by technology vendors.
Zoom was recently in the news because its terms of service update appeared to provide access to users’ data for AI training. It clarified its service terms in a blog post after the backlash. The episode underscores the importance of knowing if the vendor will use your data to train its AI. It may be worthwhile to check your Master Services Agreement to see if any vendors use your company’s data to train their AI.
OpenAI now allows internet users to block its web crawler from scraping data to train GPT models.
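In practice, this opt-out works through the Robots Exclusion Protocol: OpenAI publishes `GPTBot` as the user-agent token for its crawler, and site owners can disallow it in their site's robots.txt file. A minimal example (the rule below blocks GPTBot from the entire site; narrower `Disallow` paths can limit the block to specific directories):

```
# robots.txt — block OpenAI's GPTBot crawler from the whole site
User-agent: GPTBot
Disallow: /
```

Note that robots.txt is a voluntary convention honored by well-behaved crawlers, not an access control; it does not remove data already collected.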
For vendors that are deemed data processors, make sure they sign a Data Processing Agreement, which places restrictions on what they can do with Personally Identifiable Information.
Formal company policies are on the way
Several members said they are working on formal company policies to address AI. Some mentioned amending their independent contractor agreements to specify that contractors cannot use AI to create content. Others noted that humans should review any code created by AI.
- Approach this responsibly and remind people of their ethical obligations to their company and to themselves.
- Be mindful when negotiating with an AI vendor: understand your risk appetite and the vendor's process for changing its terms and conditions.