AI Strategy · 3 min read · Published March 13, 2026

The Open Model Problem Isn't the Model

Matt Genovese
Founder, Planorama Design

MIT Sloan finds 46% of enterprises stall on open AI models due to integration complexity. The real barrier? Organizations haven't defined what they need the model to do.

There's a recurring pattern in enterprise AI adoption that I find fascinating: the conversation about open models almost always starts in the wrong place. Teams debate benchmarks, licensing nuances, and fine-tuning infrastructure (often with great enthusiasm and very detailed spreadsheets), all of which matter, but none of which explain why adoption actually stalls. A recent MIT Sloan study found that 46% of enterprises cite integration complexity as the primary barrier to using open models. That number is striking, not because integration is hard, but because "integration complexity" is what organizations say when they haven't defined what they're integrating the model into, or why.

The Governance Gap Is Really a Requirements Gap

The same study surfaces a second barrier that, in my experience, is more fundamental: organizational governance confusion. Nobody owns the AI decision. There's no formal process for evaluating whether an open model meets the organization's actual needs, because those needs haven't been articulated with enough specificity to evaluate against. You can't select a model you haven't spec'd, and you certainly can't govern one.

This is requirements work, not in the narrow sense of writing user stories, but in the broader sense of defining what the AI system needs to do, what constraints it operates within, what data it touches, and what success looks like. Organizations that skip this step don't end up with a "model selection problem"; they end up with a dozen pilots, each built on different assumptions, none of which can be brought into production coherently.

What Open Models Actually Solve (When You've Done the Homework)

What tends to get lost in the open-vs-closed debate is that open models, deployed internally, solve a category of problems that have nothing to do with model performance. When the model runs within your security boundary, a significant number of governance and compliance concerns simply go away: customer concerns about how their data is processed by third-party AI, contractual disputes over data-handling language, the entire class of PII exposure risks that comes with sending sensitive information to an external API. All of these become far more manageable when the model is something you operate rather than something you subscribe to.


But (and this is the part that routinely gets skipped, because requirements work is less exciting than spinning up a GPU cluster) you only unlock these benefits if you've done the work to know that data residency, PII handling, and contractual compliance are your actual constraints. Without that precision, you'll evaluate open models on benchmarks alone, conclude they're "close enough" or "not quite there," and miss the fact that the real advantage was organizational, not technical.

The Right Sequence

What I've observed in organizations that successfully adopt open models is a consistent pattern: they define requirements for the AI system before they evaluate model options. That sounds obvious, but it's surprisingly rare. What does the AI need to do? What constraints must it operate within? What does success look like, measured in terms the business actually cares about? Only after those questions have real answers does model selection (open, closed, or hybrid) become a productive conversation rather than a speculative one.
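To make the sequencing concrete, here is a minimal sketch of what "requirements before model selection" can look like in practice. Everything in it is hypothetical and illustrative (the field names, the candidate models, the constraints); the point is only that organizational constraints are checked first, so benchmark comparison happens on a shortlist that already satisfies them.

```python
# Illustrative only: a requirements spec captured as data, used to screen
# candidate models BEFORE any benchmark comparison. Field names and
# candidates are invented for this sketch, not a real evaluation process.

REQUIREMENTS = {
    "data_residency": "in_region",     # must run inside our security boundary
    "pii_handling": "no_third_party",  # no sensitive data to external APIs
    "min_context_tokens": 32_000,      # driven by the actual workload
}

candidates = [
    {"name": "open-model-a", "deployment": "self_hosted", "context": 128_000},
    {"name": "closed-api-b", "deployment": "vendor_api",  "context": 200_000},
]

def meets_constraints(model: dict) -> bool:
    """Screen on organizational constraints; benchmarks come later."""
    if REQUIREMENTS["data_residency"] == "in_region" and \
            model["deployment"] != "self_hosted":
        return False
    return model["context"] >= REQUIREMENTS["min_context_tokens"]

shortlist = [m["name"] for m in candidates if meets_constraints(m)]
print(shortlist)
```

Note what the sketch surfaces: the vendor API with the larger context window never reaches the benchmark stage, because it fails a constraint the organization defined up front. Without that spec, "200K context" looks like the winning number.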

This is the same structured requirements discipline that Planorama applies to product design, extended to the AI system itself. The model is a component. It matters, but it doesn't matter first.

Matt Genovese
Founder & Product Strategy Lead

Matt leads Planorama Design, a product acceleration firm for enterprise software teams. With nearly 30 years of engineering experience, he helps CTOs and VPs of Engineering structure requirements, validate AI feasibility, and ship better software faster.

