Every company has its eyes on generative AI. Bill Gates predicts that the technology will soon upend entire industries. McKinsey anticipates a host of changes from AI tools, while Boston Consulting Group advises business leaders to “consider generative AI ready to be built into production systems within the next year.”
Exploring the legal minefield
As companies think of ways to incorporate AI into their business, in-house lawyers are — as usual — thinking about risk. And with good reason. Since ChatGPT’s launch in November 2022, a steady volley of lawsuits has targeted the providers of generative AI technology. The plaintiffs are many and varied, ranging from artists, authors, coders, and stock image houses to plain old website users. Their suits challenge both the methods used to build generative AI models and the outputs that the models produce. And their claims could ground the AI rocket shortly after take-off. One copyright infringement lawsuit, for instance, seeks over US$9 billion from GitHub and its co-defendants. Another, claiming privacy violations, demands a “temporary freeze on commercial access to and commercial development” of ChatGPT.
Outside the courthouse, regulators and lawmakers are also poised to shake up the legal status quo for generative AI. In April, a group of federal agencies — the US Department of Justice’s Civil Rights Division, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission, and the Federal Trade Commission — issued a joint statement declaring that they would use their existing authority to monitor the development and use of AI systems and enforce federal laws.
Across the pond, the European Union has already proposed the world’s first comprehensive law regulating AI. The draft law would require providers of generative AI models like ChatGPT to disclose that their content is AI-generated, design their models to prevent generation of illegal content, and publish information about training data protected by copyright. More laws and lawsuits are bound to come.
Legal uncertainty
The resulting statutes, regulations, and court decisions may make or break companies. Not just those developing AI technology, but also the many companies hoping to build and grow their businesses using these revolutionary new tools. If, for instance, a court or federal agency were to halt or limit commercial use of ChatGPT — or simply impose crippling damages — countless companies relying on the software or its API may see their business models undone. Understandably, many in-house counsel advising their clients on generative AI stress restraint.
Yet clear guidance from courts and legislatures is likely not on the immediate horizon. The lawsuits challenging generative AI technology are largely at the pleading stage, and discovery will no doubt be drawn out. Even once the courts issue their decisions, the losing parties will likely appeal, potentially up to the US Supreme Court. And while some legislatures may soon enact laws governing AI, global legal norms around the technology are also likely a long way off. The United States Congress, for one, is not known for its speed at lawmaking.
The cost of waiting
Though their lawyers may wish otherwise, companies won’t wait for the legal landscape to stabilize. Nor can they afford to. Established players and start-ups alike are rushing to integrate generative AI into their businesses, hoping to increase efficiency, cut costs, or otherwise gain a competitive edge. Companies with a wait-and-see approach risk falling behind their peers. Legal uncertainty, then, is something that any business using AI must accept rather than avoid.
But smart and successful businesses are skilled at managing uncertainty. And this isn’t the first time that industries have faced legal changes brought on by new technology. At the start of the millennium, record labels and artists battled peer-to-peer file-sharing networks and individual users over the scope of copyright law’s application to the digital world. Meanwhile, Apple developed and launched the iTunes Store. By cutting direct deals with the labels and other licensors, Apple largely sidestepped thorny copyright questions: it simply agreed to pay for licenses. And while the record industry prevailed in its suit against Napster, the licensing model was the ultimate winner. Even as digitization continues to raise novel copyright issues, many of the world’s leading businesses — from Netflix to Spotify — have built their success on licensing models that render many of those issues moot.
Similarly, many companies and individuals are now finding business solutions to mitigate the legal uncertainty around generative AI. In the absence of clear guidance from legislatures or courts, parties have turned to contracts and other legal tools to fashion their own rules for the new technology. Because contracts between private parties are by nature nonpublic, it’s impossible to gauge the full extent of this “regulation” by contract. But several signs suggest that it’s happening widely across industries.
The power of contracts
In March, for instance, SAG-AFTRA released a statement affirming that the union’s collective bargaining agreements would govern any use of AI to simulate talent’s performances; such “digital doubling” would require additional approval and compensation. A few months later, the Association of National Advertisers (ANA) — the trade association for advertisers — updated its media buying contract template for the first time in five years, adding a new provision on artificial intelligence. The ANA template now requires media agencies to disclose AI usage to their advertiser clients and obtain client consent before using AI tools. And recently, LexisNexis — which has announced its own generative AI tool, Lexis+ AI — emailed its users a reminder that the company’s agreements prohibit uploading LexisNexis data into large language models (LLMs) and generative AI tools. One can reasonably assume that a web of similar contractual obligations is being spun around every company using generative AI today. And these obligations sit on top of whatever terms the generative AI providers impose on their own users.
The interplay of law, business, and AI
These efforts to monitor and restrict usage of generative AI should not be surprising. While many businesses see the tech as a key to exponential growth, others see a threat to their existing business models: through its ability to create new content, generative AI has the potential to render traditional creators and licensors — artists, actors, data services, and others — unnecessary, or at least less necessary. These creators and licensors are accordingly looking to familiar legal tools like contract to restrict how others use their content and data, and to make new demands for that usage.
Of course, growing regulatory scrutiny of generative AI and the ongoing court disputes remain critically important. Any company using generative AI needs to monitor legal developments closely, in the United States and abroad. But until more legal clarity arrives, companies should also monitor their business partners. A business is naive if it expects that AI will let it capture value while everyone else sits idle, or that the race will simply be a contest among competitors to see who can best leverage AI. Its other partners — including its clients, vendors, and employees — will be competing for that same value. And it is in its contractual dealings with those parties that a business is likely to see the most immediate legal impact on its AI plans.