Who’s Really Advising the Client? The Blurred Lines of GenAI in Legal Practice

Cheat Sheet:

  • Enhanced efficiency. By assisting with drafting, summarizing, and legal research, AI can help to streamline workflows — though attorney oversight remains essential to ensure accuracy and context.

  • In-house GenAI adoption. Generating early-stage legal outputs can reduce costs and improve efficiency — but these gains aren’t guaranteed and depend heavily on context and oversight.  

  • Risks of over-reliance. Biased results, hallucinations, unverifiable inputs, confidentiality breaches, and diminished legal value can arise — especially when attorneys rely solely on validating AI-generated work rather than exercising independent judgment.

  • Evolving attorney-client relationship. Both parties must set clear expectations for AI use, uphold ethical standards, and prioritize independent legal judgment over cost-cutting, ensuring accuracy remains the cornerstone of professional value.

Artificial Intelligence (AI) has rapidly advanced from niche applications to becoming an integral part of legal practice, particularly for high-volume, repetitive tasks like document review and contract management. What began as an efficiency tool for junior associates in Big Law has matured into a transformative technology that now extends into generative AI (GenAI), bringing both potential and perils for the future of attorney-client relationships. As law firms and corporate legal departments adopt and refine these technologies, the future of AI promises a redefinition of how legal services are delivered, impacting attorney-client engagements at every level.

This article explores the potential trajectory of AI in the legal industry, particularly in corporate settings, examining both the near-term and medium-term impacts of AI on attorney-client interactions. We will consider both the benefits of using AI tools and the potential risks, particularly in cases where clients may begin taking an active role in generating preliminary legal outputs through GenAI before consulting their attorneys.

The current landscape of AI in legal practice

AI has been embedded in legal practice for several years, primarily in tools that facilitate document review, contract analysis, and e-discovery. Platforms such as Robin AI demonstrate how automation can help lawyers handle routine tasks more efficiently, at reduced cost, and with fewer errors. These AI tools have allowed law firms to complete large volumes of routine tasks with greater speed and accuracy, alleviating junior associates of repetitive work and allowing them to focus on more substantive legal issues.

This use of AI has not only reshaped the work of junior attorneys but also recalibrated law firm economics, allowing firms to offer cost-effective solutions for corporate clients who might otherwise balk at high legal fees for labor-intensive tasks.

In the near term, law firms will likely continue using AI to optimize these repetitive tasks, enhancing the efficiency of internal workflows without fundamentally altering the traditional attorney-client relationship. However, with the advent of generative AI, law firms and corporate clients alike face a new paradigm.

GenAI enables a “driver-assist” approach to legal work, allowing attorneys to rely on AI for drafting, editing, outlining, summarizing, and even idea generation. Unlike earlier AI tools focused on rote tasks, GenAI engages in creative processes, helping to transform a lawyer’s ideas into text or rapidly produce detailed outlines or summaries. However, like driver-assist features in vehicles, GenAI requires constant human oversight; attorneys must keep their “hands on the wheel” to ensure that AI outputs are accurate and contextually appropriate.

The role of GenAI in corporate legal departments

Looking beyond law firms, in-house legal departments are beginning to explore GenAI as a tool for reducing costs and expediting legal processes. In many corporations, particularly those without a large in-house legal team, executives and employees are experimenting with GenAI to begin answering legal and regulatory questions.

Given their deep knowledge of company operations, in-house legal departments or senior executives can use GenAI to generate preliminary responses to questions or regulatory issues. For instance, by inputting specific concerns, regulations, or business context, corporate clients can receive initial AI-generated insights, which they can then present to outside counsel for further review.

This approach — initiating the journey from “issue” to “answer” with the help of GenAI — may allow clients to engage outside counsel more efficiently. By presenting attorneys with pre-processed outputs, clients aim to shorten the distance between identifying a legal issue and receiving expert guidance. In this context, outside counsel might be called upon to “true up” or refine AI-generated outputs, clarify the information, identify overlooked risks, and provide insights based on specialized expertise. This approach offers an appealing balance between efficiency and expertise, allowing clients to manage costs without compromising legal accuracy.

Risks of over-reliance on GenAI in attorney-client engagements

However, while this hybrid AI-assisted model offers many advantages, it also introduces significant risks. The primary concern is that in-house teams may, in their desire to economize, inadvertently limit the value of outside counsel by constraining their role to merely reviewing AI-generated outputs. This situation could lead to a dynamic where outside counsel is asked to “bless” AI-generated conclusions without conducting an independent and thorough analysis of the underlying issues.

A hypothetical scenario illustrates the risk: A corporate client prompts GenAI to interpret a specific regulatory compliance issue. However, the client's input may omit critical facts or may be framed in a way that inadvertently biases the AI's response. If the client then presents this AI-generated output to outside counsel for confirmation, the attorney may feel compelled to condition their advice with disclaimers, such as noting that the guidance is "based solely on the information presented by our client" and that the firm did not undertake an independent review. Such disclaimers could render the guidance effectively meaningless, depriving the client of the specialized expertise they originally sought. The confirmation may be so limited as to be unhelpful or, in the worst case, misleading, because it rests on a potentially flawed initial AI-generated analysis.

Similarly, there is a risk of exposing information through learning models that a company would prefer to keep confidential. For example, if the fact pattern involves the protection of a trade secret, it may be ill-advised for either outside counsel or corporate counsel to include specific details in the prompts presented to the model. Should either group feed that information into the AI model, it could later surface in a response to another user's inquiry. Alternatively, excluding the information may produce the kind of flawed analysis discussed above.

Beyond factual limitations, another risk arises from the inherent biases that can permeate AI-generated outputs. GenAI systems learn from vast datasets, which can sometimes introduce or amplify biases. For instance, if an AI model is trained on historical data that reflects certain industry norms or regulatory interpretations, its outputs may reflect those biases, potentially leading to one-sided or narrow legal interpretations. In cases where clients use AI-generated outputs as the starting point for complex legal questions, they may inadvertently create blind spots, failing to account for nuanced interpretations or evolving legal standards. This over-reliance on AI can introduce inaccuracies into legal analysis, exposing clients to unnecessary risks.

The future of attorney-client relationships in a GenAI-driven world

The medium-term future of AI in legal practice will require a delicate balance between efficiency and professional judgment. As clients increasingly experiment with AI tools, the traditional role of outside counsel as a provider of comprehensive and independent analysis may evolve. Attorneys will likely see a growing number of requests to review or confirm AI-generated outputs, adding a new layer of responsibility to their role.

This could create tension in attorney-client relationships if clients seek to minimize costs by limiting counsel’s role to cursory reviews. Outside counsel may need to negotiate more transparent terms around engagements, making clear that their expertise requires comprehensive understanding and that they cannot fully endorse conclusions derived from client-driven AI prompts without independent analysis.

From a professional responsibility standpoint, attorneys will also need to be vigilant about the ethical implications of AI-assisted practices. The American Bar Association’s Model Rules of Professional Conduct already require attorneys to remain competent in relevant technology, and this requirement may be interpreted to include a thorough understanding of the risks and limitations associated with GenAI.

Law firms may increasingly need to draft client advisories or engagement letters that explicitly address AI use, setting expectations about when and how AI-generated outputs will be relied upon and transparent about what firms will and will not include in AI models. Firms may also implement policies to guide attorneys in evaluating AI-assisted work products and establishing standards for quality control, bias detection, and factual verification.

Further, corporate counsel would be wise to consider modifying outside counsel policies to clearly delineate how outside counsel can use corporate information in AI models while ensuring such uses comport with their own corporate AI policies.

Navigating the road ahead: A collaborative approach

To fully leverage the benefits of AI while mitigating risks, clients and counsel must proceed with what we might call a “collaborative vigilance.” Both parties should approach AI with open minds and eyes wide open, maintaining a shared commitment to transparency, quality, and adaptability. In the ideal scenario, corporate clients and outside counsel will engage in open dialogues about the role of AI in their relationship, recognizing the efficiencies it brings but also acknowledging its limitations.

Clients will need to resist the temptation to "box in" their outside counsel to a role of simply validating AI outputs, or, at the other extreme, to adopt so conservative a posture that AI use is prohibited altogether. Counsel, for their part, should emphasize the value they bring as advisors who offer independent insights that AI alone cannot replace.

By developing flexible, mutually understood guidelines for how AI will be used in engagements, clients and attorneys can build trust and foster a relationship that harnesses the power of technology and increases efficiencies without undermining the depth and reliability of legal counsel. 

Integrating AI into legal practice represents a profound transformation that will reshape the dynamics of attorney-client relationships, particularly in corporate settings. While AI tools introduce efficiencies, they also require careful oversight and thoughtful engagement. The future will likely see corporate clients increasingly using AI to frame their initial legal inquiries while outside counsel is tasked with verifying, refining, and expanding upon these preliminary outputs. This evolution requires that both clients and counsel navigate the challenges of AI thoughtfully, proceeding forward with a commitment to clear communication, ethical responsibility, and mutual respect.

As the legal industry adapts to this AI-driven landscape, it is essential that attorneys maintain their role as trusted advisors. Rather than simply endorsing AI-generated conclusions, attorneys should work with clients to ensure that their advice is rooted in independent judgment, informed by context, and attuned to AI’s unique risks and opportunities. With the right approach, AI can enhance, rather than erode, the attorney-client relationship, positioning both parties to navigate the complexities of modern legal practice with confidence and clarity.

Disclaimer: The information in any resource on this website should not be construed as legal advice or as a legal opinion on specific facts, and should not be considered as representing the views of its authors, its sponsors, and/or ACC. These resources are not intended as a definitive statement on the subject addressed. Rather, they are intended to serve as a tool providing practical guidance and references for the busy in-house practitioner and other readers.