Decoding AI Jargon (So You Can Actually Use It)

Here’s an experience we’re all having more often:
You’re in a meeting, and your head of AI is presenting the latest. Suddenly, they say, "We need to implement RAG with our fine-tuned LLM for better agentic workflows."
The room goes silent.
Executives nod, pretending they know exactly what that means. But there’s confusion in their eyes—the same look I see in boardrooms across industries when AI conversations devolve into alphabet soup.
Here's the thing: You don't need a PhD in computer science to lead AI transformation. But you do need to understand what these terms actually mean—not so you can impress people at cocktail parties, but so you can make smart decisions about which technologies your organization needs.
Let's cut through the jargon together.
The Three Types of AI (And Why They Matter)
When people talk about AI, they're usually talking about three very different things. Understanding these differences is critical because each solves different business problems. It's exactly where my co-author, Katia Walsh, and I begin our upcoming book, “Winning with AI.”
Let’s break down the three types of AI: predictive, generative, and agentic.
🔮 Predictive AI: The Fortune Teller
This is the AI you've been using for years, even if you didn't realize it. Predictive AI looks at historical data and says, "Based on what happened before, here's what's likely to happen next."
Real-world examples:
Netflix recommending shows you'll probably like
Your bank flagging a potentially fraudulent transaction
Sales forecasting tools predicting next quarter's revenue
Predictive AI is fantastic at pattern recognition, but it can't create anything new. It's looking backward to inform forward decisions.
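The mechanical core of every example above is the same: fit a pattern to history and project it forward. Here's a minimal sketch with invented revenue figures; real predictive systems use far richer models, but the shape is identical.

```python
def forecast_next(history: list[float]) -> float:
    """Fit a least-squares straight-line trend to past values,
    then project it one step into the future."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * n   # the next point in the series

quarterly_revenue = [100.0, 110.0, 120.0, 130.0]   # invented, steady growth
print(forecast_next(quarterly_revenue))            # -> 140.0
```

Looking backward to inform forward decisions, in twelve lines.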
⚒️ Generative AI: The Creator
This is the technology that's currently disrupting everything. Generative AI does more than just predict. It creates. It can write, design, code, and generate entirely new content based on patterns it's learned.
Real-world examples:
ChatGPT drafting your email responses
DALL-E creating images from text descriptions
GitHub Copilot writing code alongside developers
Generative AI transforms how we work because it handles the initial creation phase, freeing us up for the strategic thinking that actually requires human judgment.
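To make "creates new content from learned patterns" concrete, here is a toy generator: it learns which word follows which in a made-up training sentence, then assembles new text from those pairs. Real generative models learn across billions of documents, but the create-from-patterns idea is the same.

```python
import random

def learn_pairs(text: str) -> dict:
    """Learn which words follow which in the training text."""
    words = text.split()
    model: dict[str, list[str]] = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model: dict, start: str, length: int = 6, seed: int = 0) -> str:
    """Create new text by walking the learned word-to-word patterns."""
    random.seed(seed)   # seeded only so the sketch is reproducible
    out = [start]
    while len(out) < length and out[-1] in model:
        out.append(random.choice(model[out[-1]]))
    return " ".join(out)

training = "the customer wants value the customer wants speed"
print(generate(learn_pairs(training), "the"))
```

The output is a sentence that never appeared in the training text, yet every word transition did. That's the generative trick in miniature.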
🚀 Agentic AI: The Autonomous Executor
This is the newest frontier, and it's where things get really interesting. Agentic AI doesn't just predict or create. Instead, it acts on your behalf. These systems take your goals and figure out how to accomplish them, making decisions and taking multiple steps without constant human supervision.
Real-world examples:
AI agents that manage your entire inbox, scheduling meetings and handling follow-ups
Systems that monitor your supply chain and automatically reorder inventory
Virtual assistants that research and book travel, comparing options across platforms
The key difference? Agentic AI operates with autonomy. It's not waiting for your next prompt—it's working toward the outcome you defined.
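The supply-chain example above can be sketched as the core agentic loop: observe, decide, act, with no human prompt between steps. The item names, the threshold, and the stubbed reorder call are all invented; real agentic systems put an LLM in the "decide" step and real APIs in the "act" step.

```python
def check_stock(inventory: dict, threshold: int = 10) -> list:
    """Observe: find items running low."""
    return [item for item, qty in inventory.items() if qty < threshold]

def reorder(item: str) -> str:
    """Act: place an order (stubbed out for illustration)."""
    return f"reordered {item}"

def run_agent(inventory: dict) -> list:
    """The agentic loop: observe, decide, act -- no per-step human prompt."""
    return [reorder(item) for item in check_stock(inventory)]

print(run_agent({"widgets": 3, "gears": 50, "bolts": 7}))
# -> ['reordered widgets', 'reordered bolts']
```

Notice that nobody asked the agent to reorder widgets specifically; it was given an outcome (keep stock above threshold) and worked out the steps itself.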
The Technical Concepts That Actually Matter
Now that you understand what the different AI types do, let's talk about how they're built and improved. These are the terms you'll hear in vendor pitches and strategy meetings.
🔷 Large Language Models (LLMs)
Think of LLMs as the brain behind generative AI tools like ChatGPT or Claude. These are massive AI systems trained on enormous amounts of text data.
LLMs learn patterns in language: grammar, context, relationships between concepts, even reasoning patterns. Not all LLMs are created equal, and understanding their capabilities helps you evaluate vendor claims realistically.
🔷 Fine-Tuning
Out-of-the-box LLMs are generalists. They know a little about everything but aren't experts in your specific business. Fine-tuning is the process of taking a pre-trained LLM and training it further on your data.
Think of it like this: The base LLM went to a general university. Fine-tuning is sending it to graduate school in your industry.
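In toy form, with word counts standing in for model weights and both "corpora" invented: a generalist model knows a little about everything, and further training on your domain text shifts what it knows toward your business.

```python
from collections import Counter

general_corpus = "the market is open the team ships the product on time"
claims_corpus  = "the adjuster reviews the claim the claim is approved"

model = Counter(general_corpus.split())   # "pre-training" on broad data
print(model["claim"])                     # -> 0: generalist, no claims expertise

model.update(claims_corpus.split())       # "fine-tuning" on your domain data
print(model["claim"])                     # -> 2: now fluent in claims language
```

Real fine-tuning adjusts billions of neural weights rather than word counts, but the business point is the same: the model you end up with reflects the data you continued training on.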
🔷 Reinforcement Learning from Human Feedback (RLHF)
RLHF is one of the most important concepts in making AI actually helpful (rather than just technically impressive).
Here's how it works: After an LLM is initially trained, humans review its outputs and essentially say "this response is good" or "this response is terrible." The model learns from this feedback and gets better at producing responses that align with human preferences.
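Here's a toy version of that loop, with every response and rating invented. Real RLHF trains a neural reward model from human ratings and then optimizes the LLM against it; in this sketch, "reward" is just word overlap with responses humans liked.

```python
human_feedback = [
    ("here are three concrete steps you can take", +1),  # rated helpful
    ("as an ai i cannot possibly help with that", -1),   # rated unhelpful
]

def reward(response: str) -> int:
    """Score a candidate by word overlap with human-preferred responses."""
    words = set(response.lower().split())
    return sum(label * len(words & set(rated.split()))
               for rated, label in human_feedback)

candidates = ["Here are the steps you can take",
              "I cannot help with that"]
print(max(candidates, key=reward))   # -> Here are the steps you can take
```

The model never sees a rule like "be helpful"; it simply drifts toward the style humans rewarded, which is why the quality of the human feedback matters so much.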
RLHF is how AI companies teach their models to be helpful, harmless, and honest. But "helpful" and "harmless" mean different things in different contexts. As you deploy AI in your organization, you'll need to think about your values and how to encode them into the systems you use.
🔷 Retrieval Augmented Generation (RAG)
RAG is one of the most practical techniques for making generative AI useful in business contexts. Here's the problem it solves: LLMs know a lot, but they don't know your latest sales data, your current inventory, or what happened in yesterday's board meeting.
RAG bridges this gap. When you ask a question, the system first searches your company's documents, databases, and knowledge bases for relevant information, then feeds that context to the LLM along with your question.
Think of it like this: Instead of asking AI to answer from memory alone, RAG lets it consult your company's library first.
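The two-step flow above (search first, then generate with the retrieved context) can be sketched in a few lines. The documents are invented, retrieval is naive keyword overlap, and the generation step is stubbed to return the assembled prompt; production RAG uses vector embeddings for retrieval and a real LLM call for generation.

```python
import re

COMPANY_DOCS = [
    "Q3 sales report: revenue grew 12% driven by the enterprise segment.",
    "Inventory memo: warehouse B is at 40% capacity as of this week.",
    "Board minutes: approved the 2026 expansion into two new markets.",
]

def tokenize(text: str) -> set:
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, docs: list) -> str:
    """Step 1 -- retrieval: find the document most relevant to the question."""
    q = tokenize(question)
    return max(docs, key=lambda d: len(q & tokenize(d)))

def answer(question: str) -> str:
    """Step 2 -- generation: hand the question plus retrieved context to the
    LLM. Stubbed here: we return the assembled prompt instead of calling one."""
    context = retrieve(question, COMPANY_DOCS)
    return f"Context: {context}\nQuestion: {question}"

print(answer("How did revenue grow in Q3?"))
```

The payoff: the model answers from your documents rather than from memory, which is what makes its output current and checkable.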
The Question You Should Be Asking
After every technical explanation, I always get asked: "Okay, but what does this mean for us?"
Here's my answer: Understanding these concepts isn't about becoming a technologist. It's about asking better questions.
Instead of asking “Do we need AI?” ask:
👉 "Do we need predictive capabilities to optimize existing processes, generative tools to transform how we create, or agentic systems to handle complex workflows?"
👉 "Does RAG give us the accuracy we need at a fraction of the cost of fine-tuning a model?"
👉 "What LLM are we using? How are we handling RLHF? What safeguards prevent hallucination?"
The jargon isn't the point. It’s understanding enough to make strategic decisions that matters.
AI Fluency Starts Here
One of the biggest obstacles I see in organizations is the intimidation factor. Leaders feel like they need to understand everything before they can make any decisions.
But let’s turn it around.
You need to understand enough to ask the right questions, evaluate options, and make informed choices. You don't need to code the models. You don't need to explain the mathematics. You need to connect the technology to your business outcomes.
And that’s exactly what you're already good at.
💭 Your Turn
What AI jargon has confused you most in meetings? What technical concepts do you wish you understood better? Drop your questions in the comments, and I might tackle them in a future newsletter.
What I Can’t Stop Talking About
Want to deepen your understanding of AI? Sign up for updates and early access to our upcoming book “Winning With AI,” co-authored with Katia Walsh, which is all about using AI to support your business strategy in today's rapidly evolving landscape.
Are your teams better at AI than you are? Your people might be turning into superhumans with AI, and managing them takes a different skillset: here’s how to work with the AI superstars on your team.
My Upcoming Appearances/Travel
Oct 21: Calix ConneXions 2025, Las Vegas, NV
Nov 12: Private Client, Santa Barbara, CA
Nov 13: Brilliance 2025, Celebrating Women Disrupting Healthcare Keynote, Chicago, IL

If you found this note helpful, please forward and share it with someone who needs the inspiration today. If you were forwarded this, please consider subscribing.