How to Build the Custom AI Model You Actually Need

Picture this: You're in a strategy meeting and someone on your tech team confidently announces, "We need to build a custom AI model."
Everyone nods. The budget gets approved. Months pass.
And then you realize the general model you started with would've worked just fine … at a fraction of the cost.
I've seen this happen more times than I care to count. And it's why I devoted my latest livestream to breaking down what "custom AI model" actually means, when it makes sense, and when it's just expensive theater.
The Bloomberg Reality Check
Let’s start with a cautionary tale.
In early 2023, Bloomberg launched BloombergGPT. They spent an estimated $3.5-8 million training their very own 50-billion-parameter large language model on proprietary financial data. This was massive, cutting-edge work.
But within months, general models like GPT-4 had already eclipsed their custom model's capabilities.
All that investment, all that proprietary data, all that effort … and they would've gotten better results just using ChatGPT.
What You're Really Customizing (And Why It Matters)
When you customize AI for your business, you're not building a new brain. You're teaching an existing brain about your specific world.
There are two main ways to do this:
🔎 Fine-Tuning
This is when you train a model on your data upfront. This data could include customer service procedures, best practices, knowledge bases, and more. Once you upload your data, it’s locked in. Over time, the model becomes faster and more consistent, but if your information changes, you have to retrain it on new data.
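If you're curious what that looks like in practice, here's a minimal sketch using the OpenAI Python SDK's fine-tuning endpoints. The file name, training examples, and base model are placeholder assumptions, not a recommendation; the point is simply that your knowledge gets baked in at training time.

```python
# Minimal fine-tuning sketch (OpenAI Python SDK). File name, example records,
# and base model are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Your knowledge base, reformatted as prompt/response examples in JSONL,
#    one example per line, e.g.:
#    {"messages": [{"role": "user", "content": "How do I reset my password?"},
#                  {"role": "assistant", "content": "Go to Settings > Security ..."}]}
training_file = client.files.create(
    file=open("support_procedures.jsonl", "rb"),  # hypothetical file name
    purpose="fine-tune",
)

# 2. Kick off the training job. The knowledge is now locked into the model;
#    if your procedures change later, you prepare fresh data and retrain.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # base model to customize (assumption)
)
print(job.id, job.status)
```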
🔁 Retrieval Augmented Generation (RAG)
This approach lets the AI pull current information from your databases in real-time. When someone asks a question, the model retrieves relevant data from your systems, combines it with its general knowledge, and generates an accurate response.
When you update your database, the model instantly reflects those changes. No retraining required.
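Here's a minimal sketch of that loop in Python: embed your documents, retrieve the closest match to the question, and hand it to the model as context. The model names and sample documents are assumptions for illustration, not a specific recommendation.

```python
# Minimal RAG sketch: embed documents, retrieve the most relevant one,
# and let the model answer using that context. Model names and sample
# documents are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Refunds are processed within 5 business days of approval.",
    "Enterprise customers get a dedicated support channel in Slack.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)  # in production this index lives in a vector database

def answer(question: str) -> str:
    q = embed([question])[0]
    # Cosine similarity against every document; keep the best match as context.
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = documents[int(np.argmax(scores))]
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer using this company context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("How long do refunds take?"))
```

Swap in new documents (or point the retrieval step at your live database) and the very next answer reflects the change, with no retraining.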
(There’s also RLHF for LLMs – see last month’s newsletter for a deep dive into all the jargon!)
Your AI Infrastructure Should Be Lego Blocks, Not a Cathedral
In the book Katia Walsh and I are finishing, Winning with AI, we talk about four layers of AI infrastructure: data, models, orchestration (the systems that make everything work together), and consumption (the interfaces people actually use).
The key principle? Build with Lego blocks, not stone.

Everything in AI is changing too fast for rigid architectures. Your data changes. The models improve. The ways people want to interact with AI evolve. You need the flexibility to assemble, disassemble, and reassemble quickly.
That's why we're skeptical of massive custom model investments that lock you into one approach or one vendor.
💡 Modular and flexible wins every time.
The Smart Way to Use Multiple Models
Here's where it gets interesting: you don't need one powerful model to do everything.
Start by using a simple, inexpensive open-source model (like Meta's Llama) for basic routing: deciding whether a customer inquiry goes to a bot or a human. Save your more powerful models (like GPT-4 or Claude) for complex analysis and deep reasoning.
Using your most powerful model for every task is like using a sledgehammer to crack a walnut. Sure, it works. But why not save the sledgehammer for when you actually need it?
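Here's roughly what that tiered setup looks like in code. It's a sketch under assumptions: for brevity both tiers call the OpenAI SDK, with a small model doing the triage, but in practice the cheap tier could just as easily be a hosted open-source model like Llama.

```python
# Routing sketch: a small, inexpensive model triages each inquiry, and the
# expensive model is only called when it's actually needed. Model names are
# assumptions, not a recommendation.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def handle_inquiry(inquiry: str) -> str:
    # Cheap triage call: one word back, at a fraction of the cost per request.
    route = ask(
        "gpt-4o-mini",
        "Classify this customer inquiry as SIMPLE, COMPLEX, or HUMAN. "
        f"Answer with one word only.\n\n{inquiry}",
    ).strip().upper()

    if "HUMAN" in route:
        return "Routing to a human agent."   # hand off to a person
    if "COMPLEX" in route:
        return ask("gpt-4o", inquiry)        # the sledgehammer, only when needed
    return ask("gpt-4o-mini", inquiry)       # walnut-sized job, walnut-sized model
```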
This is where agents come in. Agents are specialized AI systems that can:
Route queries to the right model based on intent
Coordinate multiple models working on different tasks
Provide observability, helping you understand how decisions are being made
Act as critics, evaluating outputs and sending poor responses back for improvement (sketched below)
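That critic pattern is simpler than it sounds: one model drafts a response, a second call evaluates it, and weak drafts go back for another pass. Here's a minimal, self-contained sketch; the model names, pass/fail prompt, and retry limit are all assumptions for illustration.

```python
# Critic-agent sketch: draft, evaluate, and revise until the critique passes
# or the retry budget runs out. Model names and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def answer_with_critic(question: str, max_rounds: int = 2) -> str:
    draft = ask("gpt-4o-mini", question)
    for _ in range(max_rounds):
        verdict = ask(
            "gpt-4o",
            f"Question: {question}\nDraft answer: {draft}\n"
            "Reply PASS if the draft is accurate and complete; "
            "otherwise explain what is wrong.",
        )
        if verdict.strip().upper().startswith("PASS"):
            break
        # Send the critique back so the next draft can improve on it.
        draft = ask("gpt-4o-mini", f"{question}\n\nFix these issues: {verdict}")
    return draft
```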
Start Small: The Custom GPT Stepping Stone
You don't need a massive technical team to experiment with customization.
I created a Custom GPT that contains all my writings and four years of podcast transcripts. It's essentially my second brain—a repository of my thinking that I can query anytime.
When I get interview questions in advance, I feed them to my Custom GPT. It generates responses based on my own voice and ideas, putting words on the blank page and sparking insights I'd forgotten.
That's RAG in action, on a small scale. And it's a perfect way to test whether customization actually adds value before investing in enterprise-level solutions.
The Questions You Should Be Asking
When someone on your team proposes a custom model, here's what to ask:
🔶 Strategically:
What business priorities does this support?
Are we automating good processes or just making bad processes faster?
How does this reinvent the way we work, not just replicate what we already do?
🔶 Operationally:
What changes for our teams?
How will this fit into existing workflows?
What metrics will we use to measure success?
🔶 Technically:
Can we explain the model's outputs?
Are we locked into one vendor?
Do we maintain flexibility?
🔶 From a risk perspective:
Who do we go to when we need fixes?
What happens if the model gets something wrong?
Who's accountable—and I mean actually accountable, not just the tech team?
These questions separate strategic AI investments from expensive distractions.
The Real Bottom Line
Most organizations don't need to build their own large language model. They need to teach existing models about their specific world, using approaches like RAG that give them flexibility as things change.
And here's the thing that matters most: if you can't connect your AI customization directly to business outcomes, you shouldn't do it. The question shouldn't be "What's the ROI of AI?" The ROI should be obvious because you're using AI to accomplish clear business goals.
Custom doesn't mean complicated; it means focused on what actually moves your business forward.
💭 Your Turn
Are you customizing AI models in your organization? What approach are you taking: fine-tuning, RAG, or something else? What's working, and what's been harder than expected?
What I Can’t Stop Talking About
AI is making us more human. Sound counterintuitive? I’m seeing AI emphasize the human skills we all have in common: listening, discernment, trust-building, and decision-making.
Are you implementing AI fast enough? Rapid implementation is key for AI projects to make it out of the pilot phase. Here’s how leaders can prioritize speed while balancing it with respecting the dignity of those affected by accelerated change.
The book, The Mirror Effect by Dr. Sheila Gujrathi. This powerful book is a guide for disenfranchised leaders – women, people of color, and other historically marginalized professionals – who want to shatter the barriers that hold them back. The “mirror effect” is “when we surround ourselves with people who see us clearly and reflect our authentic power back to us.” I wish I had had this book by my side when I went through my bouts of imposter syndrome, as it provides practical steps and exercises on how to maximize your potential. Available November 4th.
My Upcoming Appearances/Travel
Nov 12: Private Client, Santa Barbara, CA
Nov 13: Brilliance 2025, Celebrating Women Disrupting Healthcare Keynote, Chicago, IL
Feb 27-28: OrthoForum 2026, Keynote, Tampa, FL

If you found this note helpful, please forward and share it with someone who needs the inspiration today. If you were forwarded this, please consider subscribing.