Artificial intelligence (AI) tools and large language models are evolving quickly and can improve efficiency and productivity in the workplace. Nonprofit organizations in particular can use AI technology to further their exempt missions more efficiently. However, organizations should weigh the risks and benefits of these tools and consider the following three legal issues when using them:
1. Data security and confidentiality
AI tools can be very useful for organizations that review, analyze, and evaluate large amounts of content. One example is an organization that determines whether a program is eligible for accreditation. Most accreditation programs require applicants to go through a self-study process and provide vast amounts of documentation to show that they meet various accreditation requirements.
Organizations should not share identifiable information with AI tools without first ensuring that the tools have sufficient technical and organizational measures in place to protect the information shared. In addition, large language models (“LLMs”) may use the data submitted to them for training, so organizations should consider whether personally identifiable information they share could be incorporated into the model and exposed to other users.
2. Intellectual property use
Many organizations develop educational tools and content to help individuals and communities, and make that content available on their websites. AI tools, such as chatbots, can help organizations engage with visitors so they can find the information they need quickly and efficiently. These tools can also help organizations better understand what information their audience is looking for and use that insight to provide more curated content.
While there are many benefits to using AI tools for these purposes, organizations may want to determine whether access to and use of their content will also result in LLMs combining that content with other data within their models. This could have intellectual property and licensing implications for the organization and for any third parties from whom it licenses content.
3. Privacy matters
Organizations with distributed workplaces often use tools like Zoom and Teams to communicate with each other. Some staff use AI note-taking tools to help them take meeting notes and keep track of meeting outcomes. To do this, these AI tools typically use bots that join and record the meetings, transcribe them, and analyze the transcripts.
Organizations using these tools should consider the privacy implications of the recordings and transcripts. When hosting meetings, organizations should decide whether to admit an AI bot and ensure that every participant has consented to being recorded and to having their words transcribed. This is particularly important because many states require individuals to consent before being recorded.
AI tools can help organizations pursue their missions in many ways. Given how broadly these tools can access and share content and information, organizations should remain mindful of the risks and benefits of using them and consider developing policies to guide staff who seek to use them.
Disclaimer: The information in any resource in this website should not be construed as legal advice or as a legal opinion on specific facts, and should not be considered representing the views of its authors, its sponsors, and/or ACC. These resources are not intended as a definitive statement on the subject addressed. Rather, they are intended to serve as a tool providing practical guidance and references for the busy in-house practitioner and other readers.