
Why SMEs need a tighter due-diligence process
SMEs get sold a comforting story: the vendor has a famous model partner, a decent-looking security page, and a SOC 2 report, so the risk must be manageable. That's not due diligence. That's borrowed confidence.
The buyer's real problem is simpler. You need to know what data enters the tool, what the tool is allowed to do, what subcontractors sit behind it, how often the model or workflow changes, and whether you can get enough evidence to defend the purchase later.
I would rather see a small company run a sharp 25-question review than a bloated questionnaire nobody reads. Short and serious beats long and ceremonial every time.
Practical lens: if the answer changes your approval decision, keep the question. If it only makes the spreadsheet feel sophisticated, delete it.
The 25 questions
| Category | Questions to ask |
|---|---|
| Data handling | 1) What customer or employee data enters the system? 2) Is the data used for model training by default? 3) Can we disable retention? 4) Where is data stored and processed? 5) Can we segregate tenants or environments? |
| Model and change control | 6) Which base models are used? 7) How are model updates announced? 8) Can model or prompt changes affect our output materially without notice? 9) Do you version system prompts or workflows? 10) What rollback options exist? |
| Security and access | 11) What authentication methods are supported? 12) Do you support RBAC and audit logs? 13) What admin actions are logged? 14) How are API keys stored and rotated? 15) What is your incident notification timeline? |
| Tool and action risk | 16) Can the product call external tools or take actions autonomously? 17) Which actions can be restricted or approval-gated? 18) Is there a kill switch or emergency disable feature? 19) Can we restrict file, email, browser, or code-execution capabilities? 20) Do you support an allowlist model for permitted tools and destinations? |
| Assurance and contracts | 21) What independent assurance artefacts can you provide? 22) Which subprocessors are involved? 23) What happens to our data on termination? 24) What contractual commitments exist around training, deletion, and breach notification? 25) Can we review meaningful audit evidence beyond a marketing trust page? |

How to score the answers without fooling yourself
Don't score vendors on polish. Score them on control substance.
- Green: clear answer, documented control, evidence available, contract aligns.
- Amber: partial answer, roadmap promise, evidence only after purchase, or contract language still vague.
- Red: evasive answer, no evidence, no configuration control, or commercial pressure to accept blind spots.
A red on training defaults, audit logging, or emergency disable capability should stop the deal unless there is a compelling risk acceptance with executive sign-off. SMEs do not have the staffing depth to absorb hidden control debt later.
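The scoring rule above can be sketched as a small rollup function. This is an illustrative assumption of how a team might encode it, not a prescribed tool; the question identifiers and the critical set are hypothetical names, though the critical set mirrors the deal-stoppers named above.

```python
# Deal-stopping questions per the rule above: a red here halts approval
# unless an executive signs off on the risk. Identifiers are illustrative.
CRITICAL = {"training_defaults", "audit_logging", "emergency_disable"}

def overall_decision(ratings: dict[str, str]) -> str:
    """Roll per-question ratings ('green'/'amber'/'red') into one decision."""
    reds = {q for q, r in ratings.items() if r == "red"}
    if reds & CRITICAL:
        return "stop: executive risk acceptance required"
    if reds or any(r == "amber" for r in ratings.values()):
        return "conditional: track exceptions and compensating controls"
    return "approve"

ratings = {
    "training_defaults": "green",
    "audit_logging": "red",   # no customer-visible audit log
    "emergency_disable": "green",
    "subprocessors": "amber",
}
print(overall_decision(ratings))  # -> stop: executive risk acceptance required
```

The point of writing it down, even informally, is that "red on a critical control stops the deal" becomes a rule the spreadsheet cannot quietly soften.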
Five procurement red flags worth taking seriously
- "We can't share that until after signature." Sometimes reasonable. Often a sign that the evidence won't impress you.
- "Our model provider handles security." That does not answer how this vendor configures, restricts, or monitors the product you are actually buying.
- "We update continuously." Fine. Then explain your change control, release notes, and customer notification process.
- "There is no customer-facing kill switch, but support can help." Too slow for anything material.
- "We don't really have subprocessors; it's mostly cloud and models." That is exactly what subprocessors are.
What you should retain after approval
Once procurement is done, keep more than the signed contract. Retain the completed questionnaire, risk rating, exceptions, compensating controls, approved use case, vendor artefacts reviewed, and the review date. Otherwise the next audit will feel like archaeology.
That retention package is also how you avoid repeating the same review six months later when the vendor asks for expansion into a new use case or a new data set.
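One way to keep that retention package reviewable is a single structured record per approval. A minimal sketch, assuming a simple dataclass; the field names and example values are hypothetical, chosen to match the items listed above.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ApprovalRecord:
    """One retained record per approved AI vendor and use case."""
    vendor: str
    approved_use_case: str
    risk_rating: str                       # green / amber / red at approval
    questionnaire_file: str                # the completed 25-question review
    exceptions: list[str] = field(default_factory=list)
    compensating_controls: list[str] = field(default_factory=list)
    artefacts_reviewed: list[str] = field(default_factory=list)
    review_date: date = field(default_factory=date.today)

# Hypothetical example entry
record = ApprovalRecord(
    vendor="ExampleAI",
    approved_use_case="support-ticket drafting",
    risk_rating="amber",
    questionnaire_file="reviews/exampleai-review.xlsx",
    exceptions=["no customer-facing kill switch"],
    compensating_controls=["approval gate on outbound email actions"],
)
```

When the vendor later asks to expand scope, the delta against this record is the review, which is far cheaper than starting from zero.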
SME bias to challenge: "We'll review it properly after the pilot." No, you won't. By then the business team will already be dependent on it. Do the uncomfortable questions before the workflow becomes sticky.
FAQ
Is SOC 2 enough for AI vendor approval?
No. It may help on general IT controls, but it does not answer training defaults, model changes, tool permissions, kill switches, or workflow-specific evidence.
What matters most for agentic AI vendors?
Action boundaries, approval gating, audit logs, emergency disable options, subprocessors, and clear change control. Those are the controls that determine operational survivability.
How often should SMEs re-review AI vendors?
At least annually and whenever the vendor adds new models, new agentic capabilities, new subprocessors, or new data uses. Big functional changes should trigger an immediate review.
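The re-review rule above reduces to an annual clock plus a set of material-change triggers. A hedged sketch, assuming a 365-day cadence; the trigger names are illustrative labels for the changes listed in the answer.

```python
from datetime import date, timedelta

# Material changes that trigger an immediate re-review (names are assumptions)
TRIGGERS = {"new_model", "new_agentic_capability", "new_subprocessor", "new_data_use"}

def review_due(last_review: date, changes: set[str], today: date) -> bool:
    """True when the annual clock has expired or any material change occurred."""
    annual_expired = today - last_review >= timedelta(days=365)
    return annual_expired or bool(changes & TRIGGERS)
```

Wiring this to the vendor's release notes is the hard part; the rule itself is deliberately simple so nobody can argue about when a review is owed.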