The Weights You Keep
You keep the weights. Four words that should be on every enterprise AI contract signed in 2026. They almost never are.
Every enterprise AI vendor in 2026 is selling you one of two things.
Some sell you a subscription to their model. You send data to their API, they send completions back, you pay per token. If they reprice, deprecate, or go under — your production workflow breaks.
Others sell you the right to fine-tune on their platform. Your data becomes features in their model. When you leave, the model stays with them.
Both of these are rentals.
In 2026, the smartest enterprise AI buyers — the ones with regulated workloads, long product cycles, and serious procurement teams — are walking away from rentals. They're walking away with the weights.
Why Now
Three things changed in the 18 months between mid-2024 and early 2026:
1. Open-weight models got good. Llama 3 was a credible alternative. Llama 4 is a credible default. Qwen 3 leads code. Mistral Large 2 leads reasoning at its size class. On domain-specific tasks — drug discovery, medical imaging, legal review — fine-tuned open models routinely beat frontier general models.
2. Fine-tuning infrastructure matured. What required a PhD-level MLOps team in 2023 is a paved road in 2026. LoRA, QLoRA, DPO, constitutional fine-tuning — they all have production-ready implementations.
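To see why a technique like LoRA makes fine-tuning a paved road rather than a research project, look at the parameter counts. This is a back-of-the-envelope sketch; the 4096-dimensional layer and rank 8 are illustrative choices, not figures from any specific model:

```python
# LoRA freezes the base weights and learns a low-rank update:
# instead of training a full d x k delta, it trains B (d x r)
# and A (r x k) with r << min(d, k).
def full_finetune_params(d: int, k: int) -> int:
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    return r * (d + k)

d, k, r = 4096, 4096, 8              # one attention projection, rank 8
full = full_finetune_params(d, k)    # 16,777,216 trainable weights
lora = lora_params(d, k, r)          #     65,536 trainable weights
print(f"full: {full:,}  lora: {lora:,}  ratio: {full // lora}x")
```

A 256x reduction in trainable parameters per layer is what turns "PhD-level MLOps team" into "a weekend on a rented GPU."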
3. The "what happens when you leave" conversation became a procurement checkbox. Every enterprise buyer in 2026 now asks the same question: "When our contract ends, what do we walk away with?" Vendors that answer "nothing" are losing competitive bids they would have won a year ago.
The Math of Ownership
Let's make this concrete.
A mid-market pharma company runs a molecular screening workflow 2,000 times a day. They have two vendor paths:
Path A: Rent. $0.008 per API call to a frontier vendor, with each screening run fanning out into roughly 1,000 calls. Fine-tune data stays in the vendor's platform. Annual cost ≈ $5.8M at volume (2,000 runs × 1,000 calls × $0.008 × 365 days). Stop paying → screening capability disappears.
Path B: Own. Start from an open chemistry-specialized base model. Fine-tune on the company's historical screening decisions. Run inference on their own cluster. First-year cost is higher (setup + training compute). But:
- Year 2+ marginal cost is ≈ 1/8th of Path A
- The model is an intangible asset on the balance sheet
- When the contract ends, the pharma company still has the trained weights
- Every reviewed decision compounds the model's accuracy on their specific distribution
Over a five-year horizon, Path B isn't just cheaper. It produces a company-specific asset that the rental path can't.
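The five-year comparison can be run directly. The Path A figure and the 1/8th marginal-cost ratio come from the scenario above; Path B's year-one cost ($2.5M for setup plus training compute) is an illustrative assumption, not a quoted price:

```python
# Rent vs. own over five years, using the article's scenario.
YEARS = 5
path_a_annual = 5.8e6                # rent: ~$5.8M/year at volume
path_b_year1 = 2.5e6                 # own: ASSUMED setup + training cost
path_b_annual = path_a_annual / 8    # own: year-2+ marginal cost (~1/8 of rent)

path_a_total = path_a_annual * YEARS
path_b_total = path_b_year1 + path_b_annual * (YEARS - 1)
print(f"Path A (rent): ${path_a_total / 1e6:.1f}M over {YEARS} years")
print(f"Path B (own):  ${path_b_total / 1e6:.1f}M over {YEARS} years")
```

Roughly $29M against roughly $5.4M under these assumptions, and only one of the two columns ends with weights on the balance sheet.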
What Makes This Hard (And What Makes It Possible)
The hard part is not "fine-tuning a model." That's the least hard part.
The hard part is the workflow around the model: getting real reviewed decisions into the training pipeline, maintaining evaluation rigor as the model evolves, and proving to regulators that the model which scored an FDA submission last quarter is, byte for byte, the model scoring submissions this quarter.
What makes this possible in 2026 is a new category of production workflow platforms that sit between the enterprise team and the model. The workflow runs the job. The reviewed work becomes training signal. The model gets better. The weights stay with the enterprise.
This is the Domain Labs pattern. Every lab — drug discovery, medical imaging, manufacturing QA, financial risk — runs on the same shape:
- Start from a proven open-source model for the domain.
- Run the workflow your team already knows.
- Reviewed work fine-tunes the model on your data.
- You leave with a stronger model than you started with.
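The loop above can be sketched in a few lines. This is a minimal illustration of the shape, assuming a simple approve-or-correct review flow; the field names and functions are hypothetical, not any platform's actual API:

```python
from dataclasses import dataclass

@dataclass
class ReviewedDecision:
    prompt: str        # the workflow input the model saw
    model_output: str  # what the model proposed
    final_output: str  # what the human reviewer signed off on
    approved: bool     # did the review complete?

def to_training_examples(decisions):
    """Turn completed reviews into fine-tuning examples.
    Train on the reviewer-approved text, not the raw model
    output, so corrections become training signal."""
    return [
        {"prompt": d.prompt, "completion": d.final_output}
        for d in decisions
        if d.approved
    ]

batch = [
    ReviewedDecision("screen compound X", "pass", "pass", True),
    ReviewedDecision("screen compound Y", "pass", "fail", True),   # corrected
    ReviewedDecision("screen compound Z", "fail", "fail", False),  # unreviewed
]
print(to_training_examples(batch))  # two examples survive the filter
```

The point of the sketch: the second example trains the model on the reviewer's correction, which is exactly how "every reviewed decision compounds the model's accuracy" cashes out in practice.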
The Uncomfortable Truth for Incumbents
Every enterprise AI vendor built on the subscription model has to answer the ownership question eventually.
The ones building to survive 2026 are already changing their contracts:
- Weight exports at contract end (even if only on request)
- On-prem fine-tuning as a first-class deployment option
- Data boundary guarantees that pass legal review in pharma and finance
- Model cards that stay with the customer — not locked inside the vendor's audit system
The ones that won't change will watch procurement move their RFPs elsewhere.
What to Do If You're Buying
If you're evaluating enterprise AI in 2026, three questions separate the rentals from the assets:
1. "What do we walk away with at contract end?" The answer should be: the weights, the training data, the evaluation harness, and the production workflow configuration.
2. "Can we run this on our own infrastructure?" Data sovereignty is going to become a regulatory hammer in pharma, finance, and defense. Rentals can't answer this.
3. "Whose model is this, legally?" Every enterprise contract should have a clear answer. In 2026, "ours" is the answer procurement teams are actually pushing for.
What to Watch
The teams that figured this out in 2024–2025 are already a year ahead. The teams that figure it out in 2026 can still catch up — open-weight quality is compounding faster than the enterprise procurement cycle.
The teams that don't figure it out will spend 2027 explaining to their board why their production AI workflow disappeared when a vendor repriced.
You can keep the weights. In 2026, it's the only enterprise AI strategy that survives.