As we ride the wave of artificial intelligence advancements, it’s essential to remember that innovation without intentionality — without a focus on ethics — is like building a house without a blueprint. Sure, the structure might stand, but over time, cracks will appear, and the foundation will weaken — leading to increased scrutiny from regulators and erosion of public trust.
As legal and compliance professionals, we are no longer just passive observers of this AI revolution; we are active participants. It's not enough to simply create; we must create with intention, ensuring that AI innovations are guided by strong ethical foundations. So, as we continue to guide product and engineering teams, let's keep this in mind: What will your successors think about the ethical structures you've put in place? Will they thank you for laying a solid foundation, or will they be left cleaning up a mess of unintended consequences?
Now, let’s first clarify what I mean when I talk about innovation and intention (because, frankly, we lawyers like to define things before we dive in, don’t we?):
- Innovation is about finding new and better ways to solve problems and turning those ideas into practical applications that (hopefully) add value to the world.
- Intention refers to the deliberate thought or plan behind action. It’s not just about having a vague idea; intention involves making a commitment to a purpose. It shapes decisions and guides actions toward a specific, often ethical, outcome.
We've seen incredible advances in AI over the last few years. And while this journey began long before ChatGPT and Copilot made headlines, we're now on a path where AI is no longer an abstract future: it's here, it's integrated into our lives, and it's here to stay. Sure, there are disclosure obligations regarding AI usage, particularly for regulated industries, but those rules are becoming less daunting as the regulatory landscape evolves. As a legal professional working in tech and a parent concerned about the future, I see the rapid advancements in AI through both a professional and a personal lens. While I'm excited about the potential of AI, I'm also mindful of the responsibility we carry to ensure it benefits society as a whole.
Externally, companies may produce slick marketing materials and detailed reports, host webinars, or maintain trust centers to address AI ethics and transparency obligations, but internally, have you really done the work of committing to these values? Are you embedding ethical practices into your day-to-day decision-making, or are you just crossing your fingers and hoping for the best?
Practical tips for innovating AI with ethical intention
Gather your internal AI experts
I’ll admit it: I was reluctant to label myself as an "AI expert" for a long time. I thought, "Sure, AI is a hot topic, but seasoned experts are still hard to find." What I failed to realize, though, was that the legal and compliance implications of AI are just as important as the technical innovations. If your organization doesn’t already have a cross-functional group focused on AI, it’s time to create one. Think of it as your AI Ethics Council — a diverse team from legal, security, marketing, sales, customer success, engineering, and even HR. This group should meet regularly to ensure that AI development and support align with ethical standards, legal compliance, and best practices. Remember, the ethical implications of AI aren’t just the responsibility of a tech team — they’re everyone’s responsibility.
Get clear on how AI is used and produced
Is there a “single source of truth” in your organization that details how and where AI is being used, developed, or deployed? Not only is this crucial for internal governance, but it may also become a regulatory requirement as AI regulations evolve. Even if it’s not mandatory yet, wouldn’t you want to know exactly what you’ve built or leveraged and where it’s being used? How can you ensure compliance, mitigate risks, or support teams if you don’t have a clear understanding of your own AI ecosystem? So, take the time to map out your AI landscape — it’s a critical step in responsible innovation.
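To make this concrete, here's a minimal sketch of what one machine-readable entry in that "single source of truth" might look like. The field names, system names, and vendor in this example are my own illustrative assumptions, not a regulatory schema; adapt them to whatever your governance program actually needs to track.

```python
from dataclasses import dataclass, field

# Illustrative only: these fields are assumptions for the sketch,
# not a mandated regulatory schema.
@dataclass
class AISystemRecord:
    name: str                 # internal name of the model, feature, or tool
    owner: str                # accountable team or individual
    purpose: str              # business purpose, in plain language
    vendor: str               # "in-house" or the third-party provider
    personal_data: bool       # does it process personal data?
    risk_review_done: bool    # has a risk/impact assessment been completed?
    jurisdictions: list[str] = field(default_factory=list)

def governance_gaps(inventory: list[AISystemRecord]) -> list[str]:
    """Flag systems that are missing a completed risk review."""
    return [r.name for r in inventory if not r.risk_review_done]

# Hypothetical inventory entries for illustration.
inventory = [
    AISystemRecord("support-chatbot", "Customer Success",
                   "Answer routine support questions", "in-house",
                   personal_data=True, risk_review_done=True,
                   jurisdictions=["US-CA", "EU"]),
    AISystemRecord("resume-screener", "HR",
                   "Rank inbound applications", "ExampleVendor",
                   personal_data=True, risk_review_done=False),
]

print(governance_gaps(inventory))  # ['resume-screener']
```

Even a structure this simple lets you answer "what have we built, and where is it used?" on demand, and it surfaces the systems that still need a risk review before anyone asks.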

Be transparent about your AI
In certain jurisdictions, such as California, Colorado, and Utah, there are already laws requiring transparency from AI providers. But beyond legal obligations, transparency is just good business practice. Customers and partners will ask about your AI, and when they do, don't leave your sales or commercial legal teams hanging with last-minute negotiations. Be upfront about the AI you're using or selling, its capabilities, and its potential risks. It's not only the right thing to do; it's the smart thing to do.
Check your work (and keep checking)
Here’s a tip that’s older than my first legal brief: Always check your work. When you innovate, whether it’s AI or anything else, it’s essential to periodically reassess. Does your AI system still work as intended? Is it still aligned with the ethical principles you set out at the beginning? Have you considered bias, security, and transparency in your AI’s development? It’s easy to get caught up in the excitement of building the next big thing and forget about the long-term responsibility of ensuring that it operates ethically.
In the European Union, this aligns with GDPR’s accountability and transparency principles. You need to be able to demonstrate compliance, especially when AI is involved in processing personal data. But even if your jurisdiction doesn’t mandate it yet, don't let complacency creep in. Regular audits, risk assessments, and evaluations are key to maintaining the integrity of your AI systems.
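To illustrate what "keep checking" can look like in practice, here's a minimal sketch of one recurring audit check: comparing a model's approval rates across two groups (a simple demographic-parity gap). The data, group labels, and escalation threshold are hypothetical assumptions for illustration, not a legal or statistical standard, and a real audit program would also cover security, drift, and transparency.

```python
# One recurring bias check: the absolute gap in approval rates
# between two groups of decisions produced by the model under review.

def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

def parity_gap(group_a: list[bool], group_b: list[bool]) -> float:
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

# Hypothetical model outputs (True = approved).
group_a = [True, True, False, True, True]
group_b = [True, False, False, False, True]

gap = parity_gap(group_a, group_b)
THRESHOLD = 0.2  # illustrative escalation trigger, set by your own policy

if gap > THRESHOLD:
    print(f"Parity gap {gap:.2f} exceeds {THRESHOLD}; escalate for review.")
else:
    print(f"Parity gap {gap:.2f} within tolerance; log and recheck next cycle.")
```

The point isn't this particular metric; it's that the check runs on a schedule, produces a record, and triggers escalation when something drifts out of tolerance.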
Future-proof your AI governance
As legal and compliance professionals, we can’t just focus on the “here and now” of AI — we need to think ahead. Regulatory frameworks are evolving quickly, and AI is at the forefront of new laws and standards. If you are not staying up to date with AI trends globally, now’s the time to build internal processes or partner with outside counsel to track regulatory changes. Tools like AI Policy Labs and LexisNexis can help. Are you preparing for increased scrutiny? Set up a regular review process to ensure your AI initiatives align with emerging laws. Proactively document how your AI models are used, how decisions are made, and the safeguards in place to protect consumers. Anticipate future guidelines — whether from the EU AI Act or Organization for Economic Co-operation and Development (OECD) Principles — and shape your governance accordingly. A little forward-thinking now can save you from a lot of headaches later.
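As a sketch of what that proactive documentation might look like, here's one lightweight approach: a dated, machine-readable record per AI system covering its intended use, how its decisions are made, and the safeguards in place. The keys and example values are my own assumptions, not a format required by the EU AI Act or the OECD Principles.

```python
import json
from datetime import datetime, timezone

# Illustrative only: these keys are assumptions for the sketch,
# not a format mandated by any regulator.
def governance_record(model: str, use: str, decision_process: str,
                      safeguards: list[str]) -> str:
    """Produce one dated documentation entry for an AI system."""
    return json.dumps({
        "model": model,
        "intended_use": use,
        "how_decisions_are_made": decision_process,
        "safeguards": safeguards,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }, indent=2)

print(governance_record(
    model="support-chatbot",  # hypothetical system from your inventory
    use="Answer routine customer support questions",
    decision_process="Suggests responses; a human agent approves each one",
    safeguards=["human-in-the-loop review", "PII redaction", "quarterly audit"],
))
```

Records like this, kept current, are exactly the kind of evidence that makes a future regulatory inquiry a document-retrieval exercise rather than a scramble.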
Wrapping it up
AI innovation is an incredible opportunity, one that brings vast potential for improving lives, business operations, and even society at large. But with great power comes great responsibility (yes, I had to sneak that in). As legal professionals, we are the stewards of this responsibility. We are the ones who must ensure that AI isn't just powerful, but also ethical.
So, as you continue to support your engineering, product, and development teams, ask yourself: Are we being intentional with our innovation? Are we setting up the right governance structures, taking the necessary steps to ensure ethical practices, and preparing for the future? If the answer is "yes," then congratulations: you're on the right track. If not, it's time to roll up your sleeves and start making changes.
Because, in the world of AI, innovation without intention isn’t just risky — it’s reckless.
By being deliberate, transparent, and forward-thinking, we can shape the future of AI into something that benefits society as a whole, without losing sight of the ethical principles that make it all worthwhile. So, let’s innovate with intention — and ensure we’re not just shaping the future, but shaping a responsible and sustainable one.