Most AI conference talk tracks follow a predictable arc: AI, AI, AI (*cue big promises*), sprinkle in vague advice about "embracing change," and maybe end on a slide with a robot on it. After a few years in the compliance industry, I was glad to hear Jen Gennai's keynote at this year's Compliance Week conference, The Leading Edge: Applying AI and Data Analytics in Risk and Compliance, take a different direction.
Gennai spent years as Google's Head of Responsible Innovation and co-authored the company's AI Principles, published in 2018. She's seen what works and what fails when organizations try to implement AI at scale. Her message? Stop starting with the technology. Start with the problem you're actually trying to solve. And above all: don't approach AI as one-size-fits-all.
She laid out a four-step playbook that's refreshingly practical. No buzzwords, no hype. Just a framework that makes sense for teams—particularly legal and compliance teams—who are under pressure to "do something with AI" but aren't sure where to begin.
Starting with the problem sounds obvious until you realize how many AI projects skip this step entirely. It wasn't so long ago that we all had yearly KPIs that just read "implement AI processes." So it's not surprising that teams jump straight to evaluating vendors or comparing features. They get excited about what the technology can do without asking whether it should.
Gennai's point is simple: Before you look at any tool, define what success actually means. For a marketing compliance team, that might be cutting review time from two weeks to three days. Or processing 40% more content without hiring. Or catching shadow content before it becomes a regulatory issue.
The specificity matters. "Get faster" isn't a goal. "Reduce average review turnaround from 14 days to 3 days" is. When you can measure it, you can justify the investment to leadership. And you'll know whether the project actually worked.
Without this clarity upfront, AI projects tend to drift. They become solutions hunting for problems, which is how you end up with expensive software that nobody uses.
Once you know what you're trying to achieve, the next step is figuring out where the biggest bottlenecks actually live. Not where you think they are, where they actually are.
Map your current workflows. Track where your team spends its time. What takes forever? What creates the most frustration? Where do things consistently get stuck? Is Legal always slowing down the next viral marketing campaign? Or are influencers promising things that run afoul of the FTC's Truth in Advertising guidance, only to be caught by compliance after publication?
Gennai's advice: resist the urge to start with vendor demos. You can't evaluate whether a platform is right for you if you don't understand your own processes first. And here's what often happens when teams do this exercise—they discover surprises. That process they assumed needed AI might just need better communication between departments. That tedious manual task everyone hates? Turns out it's a perfect candidate for automation.
This is where Gennai's experience building Google's Responsible Innovation team shows. She knows that effective AI emerges from understanding workflows deeply, not from bolting technology onto poorly defined problems.
Now you need to put numbers on it. How much does your current manual process actually cost?
Say your team spends 20 hours a week reviewing marketing content manually. That's roughly 1,000 hours a year. If the average fully loaded cost per hour is $150, you're looking at $150,000 annually just in staff time. And that doesn't account for the opportunity cost—what else could your team be doing with those hours? What revenue gets delayed because content sits in review limbo for two weeks?
This step does two things. First, it builds your business case. When you walk into a meeting and say "this manual process costs us $150K a year and creates compliance risk through delays," people listen. Second, it gives you a baseline. If your pilot reduces review time by 60%, you can translate that into real dollars saved and risks mitigated.
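To make the arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python using the illustrative figures from the example above; swap in your own team's numbers before taking it to leadership.

```python
# Back-of-the-envelope baseline for a manual review process.
# All figures are illustrative placeholders, not benchmarks.

HOURS_PER_WEEK = 20         # manual review time across the team
WEEKS_PER_YEAR = 50         # ~1,000 hours/year, as in the example above
COST_PER_HOUR = 150         # fully loaded cost per hour, in dollars

annual_hours = HOURS_PER_WEEK * WEEKS_PER_YEAR
annual_cost = annual_hours * COST_PER_HOUR

# If a pilot cuts review time by 60%, translate that into dollars saved.
PILOT_TIME_REDUCTION = 0.60
annual_savings = annual_cost * PILOT_TIME_REDUCTION

print(f"Baseline: {annual_hours:,} hours/year, about ${annual_cost:,}")
print(f"Projected savings at 60% reduction: ${annual_savings:,.0f}")
```

Crude as it is, this is the baseline that makes a pilot's results legible: every percentage-point improvement now has a dollar figure attached.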
Don't try to boil the ocean. Start with something focused: pick one use case, set clear metrics, and commit to checking in regularly (weekly or biweekly, not quarterly).
The pilot isn't about proving the technology works. It's about learning. Does it integrate smoothly with how your team already works? Are people actually using it? Most importantly, is it delivering the impact you mapped out in step two?
Frequent evaluation means you can course-correct fast. Maybe the AI flags too many false positives and your team starts ignoring it. That's valuable information you need to know immediately, not three months later. Or maybe it's working better than expected and you can expand the pilot sooner.
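As a sketch of what a weekly check-in might track, here's one way to compute a false-positive rate from pilot review logs. The record structure and field names below are hypothetical; adapt them to whatever your platform actually exports.

```python
# Weekly pilot check-in: how often does the AI flag content that a human
# reviewer then clears? A climbing false-positive rate is the early
# warning that the team will start ignoring the tool.

from dataclasses import dataclass

@dataclass
class ReviewRecord:
    flagged_by_ai: bool      # did the AI flag this content?
    reviewer_upheld: bool    # did a human reviewer confirm the flag?

def false_positive_rate(records: list[ReviewRecord]) -> float:
    flags = [r for r in records if r.flagged_by_ai]
    if not flags:
        return 0.0
    false_positives = sum(1 for r in flags if not r.reviewer_upheld)
    return false_positives / len(flags)

# Toy week of data: two AI flags, one overturned by a reviewer.
week = [
    ReviewRecord(flagged_by_ai=True, reviewer_upheld=True),
    ReviewRecord(flagged_by_ai=True, reviewer_upheld=False),
    ReviewRecord(flagged_by_ai=False, reviewer_upheld=False),
]
print(f"False-positive rate this week: {false_positive_rate(week):.0%}")
```

If that rate rises week over week, that's your cue to retune the rules before the team tunes out the tool.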
If it works, you have data to support broader rollout. If it doesn't, you've learned something valuable without betting the farm. This iterative approach reflects how Gennai built accountability into Google's AI systems—constant evaluation against standards, not blind faith in the technology.
There's a reason Gennai's framework resonated particularly well with this audience of AI-enthused legal and compliance professionals. Her background isn't just in AI—it's in responsible AI. She co-authored Google's AI Principles, which center on fairness, accountability, privacy, and avoiding harmful outcomes.
Those principles map directly onto the concerns compliance teams already wrestle with. When you align AI with business goals, you're not just chasing efficiency—you're ensuring you're solving real problems without creating new risks. When you map impact, you're identifying where AI might introduce bias or unintended consequences. When you calculate costs honestly, you're forcing yourself to weigh trade-offs. And when you evaluate pilots frequently, you're building in the accountability your role demands.
A key takeaway from the entire conference (not just "The Ethics of AI" session by the world's foremost ethics expert at the Markkula Center for Applied Ethics) is that AI isn't neutral. It requires the same kind of careful, thoughtful implementation that compliance teams apply to everything else. You need a solution that adapts to your company's specific risk tolerance—not one that forces generic industry regulations onto your unique situation. You need something that integrates into your existing workflows instead of creating yet another system your team has to check. And you need measurable outcomes you can actually track, not vague promises about efficiency.
Transparency isn't optional. If you can't understand how the AI makes decisions or adjust rules based on your team's feedback, you're just trading one black box for another. Look for platforms that learn from your reviewers, getting more accurate over time rather than staying static.
Ultimately, her advice pushes back against the pressure to move fast and break things. Instead, she acknowledges that compliance teams need to move thoughtfully—not because they're slow, but because the stakes are high.