Lawyers are currently litigating the boundaries of AI usage. At the same time, dedicated technologists continue to explore and develop more AI capabilities. Going forward, more people will use AI to live, work, and play — the attendant legal and ethical concerns will also evolve.
We can’t turn back the tide now, nor should we wish to. Lawyers at the leading edge can take part in proactively redesigning our legal system to reflect the impact of modern technological advancements and ensure we use AI responsibly and ethically.
Just what are AI’s recent advancements and how are they impacting our legal system?
Lawyers have used AI to automate tasks for years.
AI is a multi-purpose technology that makes processes faster, more consistent, and more cost-effective. Many lawyers have relied on AI in their personal and professional lives for years. Techniques such as grouping, concept analysis, single- and continuous active learning, and decision-tree algorithms help lawyers:
- Organize and search through large volumes of documents
- Identify patterns in legal documents and cases
- Automate legal research
- Draft contracts and other legal documents
- Automate manual tasks such as approval workflows, data collection and entry, and scheduling
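To make the first item above concrete, here is a minimal sketch of document grouping: each document is reduced to a bag-of-words vector, and documents whose vectors are similar enough are clustered together. This is an illustrative toy, not the algorithm any particular e-discovery product uses; real platforms rely on far more sophisticated feature extraction and clustering.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for a document (a crude stand-in for
    the feature extraction a real review platform would perform)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def group_documents(docs, threshold=0.5):
    """Greedy single-pass grouping: each document joins the first
    group whose seed document is similar enough, else starts a new group."""
    groups = []  # list of (seed_vector, [doc indices])
    for i, doc in enumerate(docs):
        vec = vectorize(doc)
        for seed, members in groups:
            if cosine_similarity(seed, vec) >= threshold:
                members.append(i)
                break
        else:
            groups.append((vec, [i]))
    return [members for _, members in groups]

docs = [
    "The lease agreement between landlord and tenant",
    "Tenant obligations under the lease agreement",
    "Minutes of the quarterly board meeting",
]
print(group_documents(docs))  # → [[0, 1], [2]]
```

The two lease-related documents cluster together while the board minutes stand alone, which is the basic behavior a reviewer relies on when triaging a large document set.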
For example, AI can help users automatically generate new contracts and review incoming ones to identify potential risks and compliance issues. AI helps to automate contract review and versioning, e-signature, and renewal processes to reduce the time and cost of contract management.
This type of task automation frees up time for higher-value work. AI can help lawyers make better decisions, prepare for cases more effectively, and avoid potential ethical issues. However, that is far from all AI can do.
Hybrid AI creates more powerful systems.
AI has moved beyond task automation to become an even more integral part of our lives. Hybrid AI combines multiple technologies and techniques for more powerful capabilities. For example, combining natural language processing (NLP) with machine learning (ML) can create a system that understands and responds to user input.
Hybrid AI is used widely across industries to support the underlying workflows in modern systems. Examples include:
- Autonomous vehicles use a combination of sensors, cameras, and AI to detect obstacles and navigate roads safely.
- Chatbots use NLP to understand and respond to user input.
- Predictive models use ML to anticipate customer behavior.
- AI-powered facial recognition systems are used for security purposes.
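The chatbot example above can be sketched in miniature: an NLP step normalizes and tokenizes the user's message, and an ML step routes it to the nearest "intent" learned from labeled examples. The intents and phrases below are invented for illustration; production chatbots use trained language models rather than this simple nearest-centroid scheme.

```python
import math
import re
from collections import Counter

# Tiny labeled examples standing in for a chatbot's training data
# (these intents and phrases are illustrative, not from any real system).
TRAINING = {
    "billing": ["I have a question about my invoice",
                "why was my card charged twice"],
    "support": ["the app crashes when I log in",
                "I cannot reset my password"],
}

def tokens(text):
    """NLP step: normalize and tokenize user input into term counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# ML step: build one centroid (summed term counts) per intent.
CENTROIDS = {
    intent: sum((tokens(p) for p in phrases), Counter())
    for intent, phrases in TRAINING.items()
}

def classify(utterance):
    """Route a user message to the intent with the closest centroid."""
    vec = tokens(utterance)
    return max(CENTROIDS, key=lambda i: cosine(CENTROIDS[i], vec))

print(classify("charged twice on my invoice"))  # → billing
```

Even this toy shows the hybrid pattern: the language-processing component and the learning component are separate pieces that only become useful when combined.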
Legal questions surrounding AI use
Just scratching the surface of the legal consequences of AI use raises critical questions:
- Who is responsible for the decisions AI makes? The developer? The user? The owner of the data?
- Must lawyers be able to explain how algorithms work?
- How do we mitigate the potential for data misuse and bias or discrimination in systems?
The US has yet to develop an official regulatory framework for AI. In late 2022, the FTC published an advance notice of proposed rulemaking addressing commercial surveillance and data security issues. It includes a comprehensive section on potential rules for “automated decision-making systems,” i.e., AI-backed processes.
In the past, the FTC has relied on consumer protection laws to bring AI-related enforcement actions. For example, the FTC mandated that Weight Watchers delete an entire AI system it created for a weight-loss program. The FTC said Weight Watchers advertised the app to children under 13 and gathered children's data without parental approval, in violation of the Children's Online Privacy Protection Act.
Generative AI: Is using copyright-protected materials without permission intellectual theft?
Using AI to automate tasks in law offices raises reasonably straightforward questions compared to the ethical and legal ones surrounding generative AI. These tools scrape online images, text, and code, then train their algorithms to generate similar results. The original illustrators, writers, and programmers do not grant permission or licenses to use their works, and they receive neither attribution nor compensation.
Much of generative AI's fate may be decided through a proposed class action complaint alleging that GitHub Copilot, an AI-powered coding assistant, violates open-source licensing requirements and that Microsoft, GitHub, and OpenAI profit from programmers' original works.
Getty Images has proactively banned AI-generated art over concerns about unresolved copyright issues with images. Thus far, the US Copyright Office places a premium on human authorship, maintaining that applicants must show substantial human involvement in the creation process for AI-generated materials to receive copyright protection.
The USCO recently said, “Copyright under US law requires human authorship. The Office will not knowingly grant registration to a work that was claimed to have been created solely by machine with artificial intelligence.”
Lawyers, be bold and responsible.
Our legal system must evolve to account for the changing nature of technology and its impact on our lives. As AI advances, additional legal issues will arise. Lawyers must boldly allow AI adoption to flourish while erecting guardrails that keep AI systems transparent, accountable, and in safe and ethical use.
Disclaimer: The information in any resource on this website should not be construed as legal advice or as a legal opinion on specific facts, and should not be considered as representing the views of its authors, its sponsors, and/or ACC. These resources are not intended as a definitive statement on the subject addressed. Rather, they are intended to serve as a tool providing practical guidance and references for the busy in-house practitioner and other readers. Information and opinions shared are personal and do not represent the author's current or previous employer.