
Cheat Sheet

  • Unavoidable change. The widespread adoption of AI technologies has inevitable consequences for HR and the legal departments that work with HR systems. In-house counsel cannot ignore AI, and outright prohibition is unlikely to succeed long-term.
  • Potential benefits. AI-powered HR technology has tremendous potential to improve efficiency, enhance privacy, and interrupt bias — if the systems are implemented properly.
  • Likely challenges. With those benefits come inherent risks. In-house lawyers should be aware of potential baked-in biases, hallucinations, and compliance and ethics issues.
  • Mitigating risks. The legal department should take the lead on mitigating AI risks in the workplace, starting with well-crafted policies and proper training. In-house counsel should do the research, but be sure to involve key business partners in the process.

In the world of human resources (HR), innovative technologies are continuing to change the way that businesses hire and support their employees. Recently, there has been a significant increase in the use of both artificial intelligence (AI) and intelligent augmentation (IA) within HR systems — and in-house counsel should take care to understand how these new technologies are being used in their organizations to both maximize the benefits of AI and minimize its risks.

Although the terms AI and IA tend to be used interchangeably, AI typically refers to computer systems designed to operate autonomously, performing tasks that would otherwise require human intelligence (e.g., learning, reasoning, problem-solving), while IA (also known as “augmented intelligence”) seeks to amplify human intelligence and capabilities by integrating human expertise into computer-based systems. For example, augmented intelligence can be integrated into HR systems to “automagically” collect employee information, analyze hiring data, review candidates, and assist with workplace decision making in countless other ways. For the purposes of this article, the term “AI” will be used to refer to both categories of technology.

In-house lawyers are rarely known for their risk-taking tendencies, but ignoring AI won’t make it go away, and prohibiting it in your workplace won’t eliminate its usage. To be an effective advisor to your business partners, you must familiarize yourself with the technology’s benefits, evaluate its potential risks, develop policies to govern its use, and train employees on how to use time-saving AI tools while remaining legally compliant, especially when dealing with applicant and employee data.

Potential benefits of AI-powered HR technology

Efficiency

One notable advantage of using AI technology in HR processes is its ability to automate repetitive tasks and thereby increase overall efficiency in the workplace. For example, AI can offer timely reminders about HR deadlines or pre-fill forms based on prior submissions using learned patterns of employee behavior. AI-powered apps can help schedule applicant interviews and other meetings, and HR professionals can use AI tools to generate templates and draft communications. These applications of AI technology can free employees to focus on higher-value work, rather than rote tasks that can be largely automated.

Perceived privacy

Some employees may prefer to consult an AI-powered chatbot about their employment-related questions, rather than HR or their supervisor. In fact, a 2019 study found that 64 percent of employees seeking advice would trust a “robot” over their manager. For especially sensitive HR matters, like a referral to an employee assistance program (EAP) or coverage questions about certain medical procedures, interacting with AI rather than a colleague may appeal to some, even if the technology retains a record of the conversation rather than offering true anonymity.

Interrupting bias

AI technology can reduce bias by anonymizing certain characteristics about applicants, standardizing interview formats and screening questions, and providing consistent information about the hiring process to all candidates through AI-powered chatbots. Implementing AI tools in this way can help ensure that all candidates are on a level playing field throughout the hiring process, limiting the effect of human bias in employment decisions.

Challenges of incorporating AI into HR processes

Potential baked-in biases

While AI tools can be used to mask demographic information on applications for employment, those tools are built by humans — and trained on existing data — both of which may carry biases that unintentionally shape the tools. Mitigating these effects requires careful human supervision and continuous monitoring of the technology to ensure that any potential biases in the system are corrected before they affect hiring decisions or workplace behavior. To address this risk, laws and guidance governing the use of AI in HR processes have emerged to suggest or require that such tools be audited for potential bias, like New York City’s law regulating the use of automated employment decision tools and recent guidance from the Office of Federal Contract Compliance Programs (OFCCP) regarding the use of AI by federal contractors.

Hallucination

Another risk of using this technology is the potential for errors or inaccuracies in the information provided by AI, called hallucinations. AI technology often operates by processing substantial amounts of data and recognizing common patterns of behavior. When prompted to provide information that falls outside its scope of knowledge, the AI tool may respond based on inference or a previous pattern, which can lead to inaccurate or fabricated data. These risks are most likely to appear when using generative AI, such as chatbots and large language models (LLMs). HR professionals using an AI tool to draft a policy or communication should verify that the information contained within the drafted text is accurate rather than a hallucination.

[Read more: Legal Tech: 5 Use Cases for Large Language Models in Legal Departments]

Compliance

Most workplace HR systems contain sensitive employee data, including contact information, pay and employment history, and performance evaluations. Data protection laws govern how and when such data may be shared or used, and AI-focused laws add a layer of complexity to that compliance strategy. Although there has been a recent increase in regulatory activity relating to this technology, AI is mostly outpacing the legislative process; however, existing laws and regulations, like the 1978 Uniform Guidelines on Employee Selection Procedures, can provide useful frameworks for employers to follow. Recent guidance from federal agencies, like the US Department of Labor’s “Principles for Developers and Employers,” points to AI as an enforcement priority, at least in the short term.

Ethics

The early adopters in your workplace may be eager to try every new AI-enabled tool hitting the marketplace, but it’s important to consider not just whether you can launch new technology but also whether you should. We can’t expect AI tools to always reflect our own values, and fast-moving innovations in AI technology, like employee monitoring tools that track employees’ work habits and stress levels, make this inquiry all the more important.

Mitigating the risks of using AI in HR

Policies

After familiarizing yourself with current AI tools on the market and evaluating which ones pose acceptable levels of risk for your organization, you should work with your business partners to develop policies and procedures for how the technology may be used, what controls are required to ensure its continued alignment with your company’s values, and what approvals and escalations may be required in the event something goes awry. If you haven’t already done so, consider adding an AI policy to your employee handbook or policy library.

To protect your organization’s confidential information (like employee data), ensure you address what information employees can (and cannot) input into an AI tool, even if that tool is part of your internal network rather than a web-based tool available to users outside the company. As a nonprofit research institute, our organization assembled a cross-functional team of experts across key functions to develop a comprehensive AI framework that not only provides guardrails for employees’ use of AI generally but also considers how these tools might affect the work we do, the communities we serve, and our clients.

Training

A written policy is most effective when accompanied by strategic communications and training to reinforce the message. Your HR counterparts may welcome the opportunity to attend customized training, developed by in-house legal experts familiar with your organization’s business model, about how the rapidly changing field of privacy and cybersecurity, including AI technology, affects their day-to-day work. Topics might include how to use AI responsibly, recent laws regulating the use of AI in the recruitment and hiring process, or examples of HR privacy gone awry due to AI tools and the consequences of such missteps.

Bottom line: Do your research and start the conversation with your key collaborators at work. Opening the door may also encourage your colleagues to ask you questions and seek your advice regarding new AI tools prior to implementing them so you can proactively identify and mitigate potential legal risks on the front end. Below are some resources to help you get started in your AI technology research: