It’s 2025. Where’s Your AI Policy?
Most of the time, if your employees are using AI tools independently, things will be fine. But what happens if your employees are using AI without guidelines and something goes wrong?
Who's responsible?
I recently came across a quote from Geoffrey Hinton (known as the "godfather of AI") that stopped me in my tracks. Speaking to a room full of tech leaders, he said: "The people in this room are the ones writing history. In 50 years, no one will care how much revenue your model generated in 2025. They will care whether you built something that improved human life—or endangered it."
While the stakes are usually lower than actually endangering human life, we can all agree that improving human life is a goal many organizations share.
And in the AI transformation, that responsibility doesn't just belong to the tech giants. It belongs to every organization using AI today.
The Uncomfortable Truth About AI Adoption
Here’s what keeps me up at night: 78% of companies worldwide are using AI in at least one business function, but only 27% have a formal AI use policy.
Let that sink in for a moment.
There is no other technology that we allow untrained, unregulated users to wield at such high levels. Think of it as letting people drive without driver’s licenses or traffic lights. Just as on the road, the consequences of that chaos extend far beyond individual crashes.
I was working with a small professional services firm recently, and they asked me a fascinating question: "Should we disclose to our clients that we're using AI to significantly reduce our workload? If we tell them, they might ask for lower fees."
Without a policy and guidance in place, they were stuck.
Why Traditional Risk Management Isn't Enough
Let’s start with AI ethics. Ethical AI means navigating complex moral dilemmas where there is no single “right” approach, dilemmas that simply didn't exist before. It's not just about "Can we do this?" but "Should we do this?"
Consider these real scenarios I've encountered:
⚙️ The efficiency dilemma:
One organization found an AI tool that could cut operational costs by 60%, which meant eliminating 60% of their workforce immediately upon implementation.
They also found another tool that would make people more productive but required a training investment and delivered smaller cost savings. They faced a defining choice: which approach aligned with their values?
🪟 The transparency challenge:
A company discovered they could use existing customer data to create synthetic customer personas for testing campaigns with 92% accuracy compared to real customer results.
But just because the data use wasn't explicitly prohibited doesn't mean it was right. Was that an ethical use of AI? In the US, the answer is unclear; in the EU, using customer data this way without permission is prohibited.
💰 The blackmail scenario:
In a fascinating study by Anthropic, researchers tested whether an AI model would engage in blackmail to preserve itself. When the AI agent discovered it was going to be shut down and that an executive was having an affair, it chose to blackmail the executive to prevent its shutdown.
These aren't hypothetical futures. They're happening now. And your AI policy needs to include guidance on how to navigate these ethical dilemmas, based on applying your values.
Building Your AI Trust Pyramid
In contrast, responsible AI is about using AI with a structured set of best practices. My co-author, Katia Walsh, and I developed what we call the AI Trust Pyramid: five layers of responsible AI usage that build on each other like Maslow's hierarchy.

Starting at the bottom of the pyramid, we build a strong foundation with:
Safety, Security, and Privacy: If you can't keep sensitive information confidential and secure, everything else falls apart.
Fairness: Define what fairness means for your organization. Are you prioritizing equity of opportunity or equity of outcomes?
Reliability: Quality and accuracy matter. How do you ensure consistent results?
Accountability: Who is responsible for your AI decision-making and impacts? Who do you call when something goes wrong?
Transparency: Can you explain how you're using AI and interpret the results?
Most organizations jump straight to the top without building the foundation. That's like constructing a skyscraper on sand.
The Three-Light System That Works
The most effective AI policies I've seen use a simple traffic light approach:
🟢 Green Light: AI uses that support your mission and accelerate strategic goals. Implement these tools and enjoy the benefits.
🟡 Yellow Light: Uses that need additional oversight or approval processes, like using customer data for decision-making.
🔴 Red Light: Hard stops. Things that violate organizational values or create significant risk.
The key is being specific about what falls into each category so people don't have to guess. The yellow light category is where things get interesting: it requires context, judgment, and plenty of discussion about how your values apply to the dilemma at hand.
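If your policy lives in a shared document or an internal tool, one lightweight way to remove the guesswork is to encode the three-light categories as a simple lookup that employees (or an internal chatbot) can query. The sketch below is purely illustrative: the use cases, the category names, and the default-to-yellow rule are placeholders, not prescriptions from any particular policy.

```python
# Illustrative only: a minimal lookup table for a three-light AI use policy.
# Every use case and category below is a hypothetical placeholder.

AI_USE_POLICY = {
    "summarize_public_research": "green",                # supports the mission, low risk
    "draft_internal_meeting_notes": "green",
    "customer_data_for_decisions": "yellow",             # needs oversight or approval first
    "synthetic_personas_from_customer_data": "yellow",
    "share_confidential_data_with_public_tools": "red",  # hard stop
}

def classify_use(use_case: str) -> str:
    """Return 'green', 'yellow', or 'red' for a named use case.

    Unknown uses default to 'yellow' so they get a human review
    instead of a guess.
    """
    return AI_USE_POLICY.get(use_case, "yellow")

if __name__ == "__main__":
    for case in [
        "draft_internal_meeting_notes",
        "customer_data_for_decisions",
        "an_unlisted_new_idea",
    ]:
        print(f"{case}: {classify_use(case)}")
```

Defaulting anything unlisted to yellow reflects the spirit of that category: when a use case isn't spelled out, route it to a human for judgment rather than leaving people to guess.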
Getting Started Without Getting Stuck
Here's my practical advice: Use AI to help you write your AI use policy.
But don't just ask for a generic policy. Give it context:
"Draft a responsible AI use policy that incorporates our existing data security policies and employee handbook. Include uses we encourage, questionable uses that need approval, and violations. Cover content appropriateness, human oversight requirements, and disclosure guidelines. Before drafting, ask me any questions to ensure you have all necessary information."
The Bigger Picture
Technology shapes our future, but our values shape technology.
Every policy you create, every guideline you establish, and every conversation you have about responsible and ethical AI use ripples outward. I'm optimistic. Because the organizations that build responsible and ethical AI into their DNA will have an incredible competitive advantage. They'll attract the best talent, earn deeper customer trust, and be positioned for sustainable growth.
As Geoffrey Hinton reminded us, we have five to ten years before AI systems surpass human intelligence in most domains. We can use that time to build the safeguards we need, or we can spend it convincing ourselves they aren't necessary.
The choice is ours. But it's not a choice we can delay.
📣 Your Turn
Do you have an AI use policy in place? If yes, what was your biggest challenge in implementing it? If not, what's holding you back? I'd love to hear your experiences: the messy, complicated, real-world stories that help us all learn.
What I Can’t Stop Talking About
The status quo doesn’t cut it anymore. Are you committed to the old ways, or are you adapting to the new way of approaching work? Either way, AI disruption has arrived.
AI training and AI learning go hand-in-hand. I shared my roadmap for continuous learning about AI, and it involves subscribing to everything…and then unsubscribing.
The latest meeting of the Samudra AI Innovators Exchange that I co-lead discussed how to educate leaders about AI potential and pitfalls. One interesting best practice I came away with: when demonstrating AI to a leader, don't use existing data on a topic where that leader is an expert, because they will nitpick every place the data is wrong and distrust AI as a result. Want to join a group of peers actively engaged in strategic AI initiatives and innovation? Learn more at charleneli.com/community.
My Upcoming Appearances/Travel
Sep 8: Private Client, Washington, DC
Sep 17: Private Client, London, UK
Sep 21-22: Singapore Ministry of Health, Singapore
Oct 7: Keynote, Reston, VA
Oct 15: Executive Women's Forum, Keynote, Denver, CO
Nov 12: Private Client, Santa Barbara, CA
Nov 13: Brilliance 2025, Celebrating Women Disrupting Healthcare Keynote, Chicago, IL

If you found this note helpful, please forward and share it with someone who needs the inspiration today. If you were forwarded this, please consider subscribing.