The Playground Paradox: Why Boundaries Accelerate AI Adoption


I've been digging deeper into a question that keeps coming up with leaders:
Why are smart, capable organizations struggling so much with AI adoption?
The pattern is familiar. We show people amazing AI capabilities and explain all the benefits, and they respond with, “No, thank you. Put that voodoo in the corner where I don't have to look at it.”
What's going on here?
After years working in the transformation space, I can guarantee you one thing: It's never about the technology. It's always, always about the people. This is why I focus so much on leadership—it's absolutely essential for navigating these waters.
The Cultural Current Against AI
One of the most fascinating statistics I've seen recently comes from the Stanford AI Index. When they surveyed people across different countries about whether AI would be beneficial or harmful, the results were striking.
In China, 83% of people believe AI products and services will be more beneficial than harmful. In the United States? Just 39%.
This means if you're leading an American organization, you're swimming against a powerful cultural current. The predominant way of thinking about AI in the U.S. is suspicious—that more harm than good will come from it.
This cultural context explains a lot about organizational resistance. You're not just fighting individual concerns; you're pushing against deeply ingrained cultural beliefs.
Understanding the Psychology of AI Resistance
When we look closer at why people resist AI, several factors emerge:
Fear of the unknown: AI is new, complex, and often perceived as unpredictable. People fear what they don't understand, especially when it threatens their jobs.
Loss of autonomy and control: We keep hearing that AI will change workflows, take over tasks, and make decisions. People naturally worry about losing control over their work.
Threat to competency and mastery: AI systems can perform certain tasks better than humans, raising existential questions like "What is my value?" People who've spent decades building expertise suddenly feel their skills are devalued.
Lack of trust: There's mistrust of AI itself (biased outputs, wrong answers) and mistrust of leaders and organizations (surveillance concerns, data privacy).
Cognitive load and social pressure: Learning AI tools requires mental effort, and being an early adopter when peers resist can be socially risky. In cultures that punish failure, experimenting with AI feels unsafe.
Previous disappointments: Many have experienced failed digital transformations or seen technologies like social media not live up to their promises.
Unclear communication: When leaders aren't transparent about how AI will be used, especially regarding jobs, people assume the worst.
These concerns aren't irrational. They're perfectly normal human responses to change and uncertainty. Recognizing them is the first step toward addressing them.
The Counterintuitive Solution: More Constraints, Not Fewer
Here's where things get interesting. Most leaders assume the best way to drive AI adoption is to open the floodgates—let people explore freely and figure it out themselves.
I've found the opposite is true: By putting in limitations, you're actually expanding possibilities.
This sounds counterintuitive, but consider the pros and cons:
Pros of constraints:
Clarity and focus about AI's purpose
Ability to make quicker decisions
Increased confidence that you're pursuing the right areas
Better risk management
Surfacing and addressing unspoken fears
Building trust
Cons of constraints:
You might miss some opportunities
Some potential reduction in creativity (though I actually believe constraints increase creativity)
Possible dissatisfaction from those who want to pursue pet projects
I recently spoke with an organization that had created a list of 90 potential AI use cases—a common approach. But as they developed their strategy, they realized "use cases aren't a strategy." They went back to their business objectives, identified their top three strategic goals, and narrowed those 90 use cases to the few that would have the most significant strategic impact.
By going around to all stakeholders who'd proposed use cases and explaining how they were prioritizing initiatives that supported the broader business strategy, they built much greater alignment and trust.
Why Boundaries Create Safety and Speed
There are several important ways constraints and boundaries help organizations move faster with AI:
They reduce overwhelm. Instead of tackling "the whole world of AI," teams can focus on specific applications that directly support strategic goals.
They increase clarity. Everyone understands what you're doing and not doing with AI.
They channel energy and enthusiasm into focused areas rather than dispersing it.
They foster creativity. Ask any artist or architect: the worst thing is a blank canvas. Constraints are actually the source of creativity. When you have limitations, whether in space, regulations, resources, or time, you become more innovative within those boundaries.
They manage risk by limiting experimentation to areas with acceptable risk levels.
They build psychological safety. By creating clear boundaries, people feel safe to explore within them.
They accelerate iterations because decision-making is focused on narrower domains.
Think about a playground full of children. If there's a fence, where do the kids play? Right up to the fence line. If there's no fence, they stay in the middle and don't explore the edges—because they don't know how far they can go and still be safe.
The more clearly you define the sandbox people can play in with AI, the more confidently they'll explore it.
Creating Psychological Safety Through Strategy
One of the most powerful ways to build psychological safety is through a clear AI strategy that articulates:
What AI will and won't be used for. Strategy is about choices: both what you will do and what you won't.
How it will affect jobs and roles. When people's livelihoods are at stake, transparency is essential. If you don't articulate this clearly, people will make assumptions, and they won't be positive ones.
Your implementation plan. What will you deliver with AI over the next six quarters? What value and impact will it create each quarter? Things may change (this is AI, after all), but having a clear plan builds confidence.
When your AI strategy aligns with your broader business strategy, people think, "This has been thoroughly considered. They're addressing my concerns. I can begin to trust this approach."
The Six-Quarter Walk methodology I've developed is particularly valuable for AI implementation planning. Unlike traditional annual planning cycles, this approach helps you map out a rolling 18-month horizon with quarterly milestones while maintaining the flexibility to adapt as AI capabilities evolve. I'll be digging deeper into this methodology in my May 27th livestream, but you can also learn more about it in this article: "Where Structure Meets Flexibility: How The Six-Quarter Walk Will Guide Your AI Strategy."
If you're navigating these kinds of questions with your team, you're not alone. I’ve been working with leadership teams to create the clarity, constraints, and confidence needed to move forward. If you're curious what that could look like for your organization, here’s where you can learn more.
Leadership is Absolutely Required
More than anything else, this moment demands courageous leadership. If you knew all the answers, it would be easy. But you don't—and that's why courage is essential.
It's particularly challenging now because you're fighting those cultural headwinds of fear about AI. You'll face personal pushback because people are worried.
Being aligned as a leadership team, clear about your purpose, standing firm in your convictions, communicating constantly, and genuinely listening to concerns—these are the hallmarks of the leadership needed to navigate AI transformation.
Remember that resistance isn't a sign of a bad team. It's a signal that your team needs more clarity, more constraints, more boundaries to help them feel secure in exploring this new territory.
When you create a firm foundation—clarity about your mission, purpose, values, and strategy—you give people something solid to stand on while they venture into the unknown. As I like to say, you can only jump high from a firm foundation. If the ground beneath you is soft and mushy, you can't push off very far.
Constraints don't limit your potential. They focus your energy and create the psychological safety needed for exploration and growth.
💭 Your Turn
What constraints or boundaries have you established around AI in your organization? Have you seen evidence that clearer boundaries actually increase psychological safety and speed of adoption?
What I Can’t Stop Talking About
The baby steps to transformation. Two years ago, I knew nothing about AI—my coding experience was limited to ancient HTML. Now I build functional apps. Not perfect ones, but that was never the point. Transformation isn't about quantum leaps—it's stretching just beyond comfortable, one step at a time.
Six myths about AI debunked. Are misconceptions about data preventing your organization from moving forward with AI? Many leaders fall into common traps that stall progress. Here are six myths you can counter with practical insights to build a more effective, realistic AI strategy.
How to lead in times of disruption. I had an amazing conversation with Bruce Temkin on his podcast "Humanity at Scale" about how leaders can turn disruption into competitive advantage. We explored creating psychological safety while pushing boundaries, balancing empathy with bold vision, and why the best leaders don't just adapt to change—they drive it. Learn more here.
My Upcoming Appearances
Jun 10: Betterworks Webinar, Virtual
Aug 29: Indy SHRM, Indianapolis, IN
Nov 13: Brilliance 2025, Celebrating Women Disrupting Healthcare, Chicago, IL

Charlene Li
If you found this note helpful, please forward and share it with someone who needs the inspiration today. If you were forwarded this, please consider subscribing.