AI Security: What Most Businesses Get Wrong (And How to Actually Fix It)

Why traditional security thinking fails with AI and what it takes to manage risk without slowing innovation

Gregory Van Duyse

CEO, Leap AI Solutions

8 min read

When businesses start evaluating AI, they almost always end up in one of two camps. The first group gets excited and moves fast, skipping the hard questions. The second does the responsible thing, runs it through their standard risk process, and walks away with a "no" that kills the project entirely.

Both outcomes are a problem. And both are avoidable. The gap between reckless adoption and paralysis by risk model is exactly where the real opportunity sits, and closing that gap starts with rethinking how you approach AI risk in the first place.

1. Applying Traditional Risk Models to a Non-Traditional Problem

This is where most companies stumble before they even get started.

The instinct makes sense. You have frameworks that work. You've used them for years. The problem is they were built for traditional software — systems with predictable inputs and predictable outputs. AI doesn't work that way. When you feed a fundamentally different technology through a framework that wasn't designed for it, you almost always get the same result: the answer comes back "no." Project shelved. Competitive advantage handed to whoever was willing to think differently.

The companies making real progress with AI aren't ignoring risk. They realized the goal isn't to eliminate risk. It's to understand it well enough to manage what's left.

Bottom Line: AI risk isn't a checklist you pass or fail. Control enough of it that you can name what's left, then decide what to do with it.

2. Not Understanding the Three Dimensions That Make AI Risky

Ask most executives what makes an AI deployment risky and you'll get answers about compliance or hallucinations. Those things matter, but they're symptoms. The actual structure of AI risk comes down to three things, and if you're not evaluating all three before you deploy, you're flying blind.

Now layer agentic AI on top. You're not dealing with one AI anymore. You're dealing with multiple agents, each with their own risk profile across all three dimensions, interacting without a human in the loop. The risk doesn't add. It compounds faster than most organizations expect.

Bottom Line: Map every deployment against these three dimensions before anything goes live. It takes an hour and tells you more than any compliance framework will.

3. Thinking Prompt Injection Is Solved (It Isn't)

This one catches people off guard because it sounds like something that should have been fixed by now. It hasn't been.

Prompt injection is still one of the genuinely unsolved problems in AI security. When an AI processes a request, it combines your system prompt with the user's input into one block of text. The model has no built-in way to treat your instructions as more authoritative than the user's input. A sophisticated attacker can write a prompt that quietly overrides your rules, and the model won't flag it.

Putting a guardrail model in front to screen inputs sounds reasonable. It's just not sufficient:

Context management adds another layer of exposure that almost nobody accounts for. Your AI doesn't hold everything in mind at once. If your security instructions get pushed out of the active context window during a long conversation, the model quietly stops following them. No errors. No warning. An attacker who understands this can engineer that situation deliberately.

Bottom Line: Load order matters more than most people realize. Prompt injection testing shouldn't be a one-time exercise. It should be ongoing.

4. Treating "Local vs. Cloud" as a Binary Decision

The conversation usually goes one of two ways. Either keep everything local so data never leaves the building, or use a cloud provider because they promise not to train on your data. Both positions are more fragile than they sound.

The architecture that works sits between those two extremes. Use a small local model as an anonymization layer. Before any sensitive data reaches an external model, strip and tokenize it locally. Get the response back. Reinstate the original values. The frontier model does its job without ever seeing anything identifiable.

Bottom Line: You don't have to choose between frontier model intelligence and keeping sensitive data in-house. The right architecture gives you both.

How Leap 41 Approaches AI Security

A lot of security engagements end the same way: a long process, a detailed report, and a recommendation that kills the project. That's not what we're here to do.

The goal is always to get a business to a point where they understand their residual risk clearly enough to make a real decision about it. The model we use borrows from something businesses already know well: how you manage people. HR exists, in large part, to manage the risks that come with human employees. AI agents aren't people, but the risk vectors are close enough that the same thinking applies.

Our layered approach:

  1. Governance First. Clear answers to basic questions before anything else: who has access, what decisions need a human, and what happens when something goes wrong. Defined roles, escalation paths, and documented policies for bringing AI tools in and out.
  2. Zero-Trust Infrastructure. Secure the environment, not just the model. Identity and access management, data classification, and network segmentation so a compromised agent can't move through your environment.
  3. Data Hygiene Before Model Access. Normalize, classify, and tag your data before it reaches a model. This fixes both performance and security problems at once, which makes the business case for doing it properly much easier.
  4. Deterministic and AI Guardrails Together. Rule-based checks for known PII patterns, combined with AI-based semantic analysis for intent. Neither is sufficient on its own.
  5. Context Engineering for Security. The order you load instructions into context affects whether they survive a long conversation. Critical security rules need deliberate positioning, not just a spot in the system prompt assumed to persist forever.
  6. Human-in-the-Loop by Default. The checkpoints where a human makes the call aren't inefficient. They're what makes everything auditable. Build them in from the start because retrofitting them later is expensive.
  7. Quantum-Resistant Architecture. Encryption is not a permanent guarantee. "Harvest now, decrypt later" attacks are already happening. The time to reduce dependency on encryption-only security is before quantum capability matures, not after.

The methodology gets you to 80% of risk controlled through governance, education, and targeted technical controls. The remaining 20% becomes something you can see, name, and decide on deliberately. That 20% is manageable. Ignoring the first 80% is where things go wrong.

Getting Started: Your Next Steps

  1. Map every AI deployment against autonomy, data access, and external access. You'll surface your biggest exposures in the first sitting.
  2. Clean your data before anything else. Classification and tagging first. Everything else depends on it.
  3. Design the anonymization layer. Decide what stays inside your perimeter and what goes out, and build that separation deliberately.
  4. Try to break your own system. Run a basic prompt injection exercise. If your team hasn't tested it, you don't know how it holds up.
  5. Book a diagnostic with our team. We'll identify where your residual risk actually sits and build a path forward — one that lets you move, not one that gives you reasons to stop.

Ready to build AI that's actually secure? Download our free 7 Pillars report at insights.leap41.ca

More Insights

8 min read

AI Security: What Most Businesses Get Wrong (And How to Actually Fix It)

Most businesses approach AI security in one of two ways: move too fast and ignore the risks, or apply outdated frameworks that shut everything down. Neither works. The real opportunity lies in understanding AI risk differently across autonomy, data access, and external exposure and building systems that balance control with progress.