Software Selection - A Practical Playbook for Confident, Defensible Decisions
- John Hannan

Great software selection isn’t a beauty contest. It’s a structured way to turn business goals into evidence—so your team can choose a platform you can defend to finance, IT, and operations, and still feel good about on go‑live day.
I’m an industrial engineer who’s spent 25 years leading selections and implementations across manufacturing, distribution, and life sciences. I’ve helped 70+ organizations make high‑consequence software choices and lived through ~100 go‑lives. The method below is the one I use when the stakes are real and “demo theater” won’t cut it.
Why software selections fail
Typical failure patterns:
Demo‑driven decisions - Picking the best presenter, not the best fit.
Vague criteria - “Must support integrations” (everything does) vs. precise, testable needs.
TCO blind spots - Ignoring implementation effort, internal time, managed services, and change management.
Weak evidence - No scripts, no scorecard, no traceability—leading to buyer’s remorse and audit pain.
The fix - Lock criteria up front, script your scenarios, score with weights, and demand artifacts (not just assurances).
My software selection framework
Workstream A — Strategy & Criteria
Problem framing - What outcomes matter (e.g., faster close, fewer chargebacks, higher fill rate), what must not happen, and who owns what.
Decision criteria - Functional fit, integration posture, security/compliance, scalability, roadmap, TCO, partner quality.
Weights - Align stakeholders on what matters before vendors present.
Workstream B — Evidence & Demos
Scripted, day‑in‑the‑life demos using your data and edge cases.
Proof of round‑trip (create → transact → post → report) for the processes that make or break your P&L.
Reference calls with similar companies, guided by a short question set (what broke, what they’d redo).
Workstream C — Commercials & Risk
TCO model - Licenses, implementation, iPaaS/ISVs, internal time, training, and run‑state support (see the cost roll‑up sketch after this list).
SOW clarity - PRICEFW (Portals/Reports/Interfaces/Conversions/Enhancements/Forms/Workflows), change control, environments, data migration scope.
Risk register - Owner, trigger, response plan—for selection and implementation.
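To make the TCO conversation concrete, here is a minimal roll‑up sketch. The category names and figures are placeholders rather than benchmarks; your model should follow however your finance team treats one‑time versus recurring costs.

```python
# Illustrative multi-year TCO roll-up; all category names and figures are placeholders.
def total_cost_of_ownership(costs: dict[str, float], years: int = 5) -> float:
    """One-time costs count once; recurring costs are multiplied by the horizon in years."""
    one_time = (costs["implementation"] + costs["data_migration"]
                + costs["internal_time"] + costs["training"])
    recurring = costs["licenses"] + costs["ipaas_and_isvs"] + costs["run_state_support"]
    return one_time + recurring * years

example = {
    "licenses": 120_000,          # annual subscription
    "implementation": 250_000,    # partner services (SOW)
    "ipaas_and_isvs": 30_000,     # annual integration platform / add-ons
    "internal_time": 90_000,      # loaded cost of your team's hours
    "training": 25_000,
    "data_migration": 40_000,
    "run_state_support": 35_000,  # annual managed services / support
}
print(f"5-year TCO: ${total_cost_of_ownership(example):,.0f}")
```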
The weighted scorecard
Assign 1–5 for each item, multiply by the weight, and total (a small calculation sketch follows the table). Lock the weights before demos.
| Criterion | Weight |
| --- | --- |
| Process fit (day‑in‑the‑life scenarios) | 20 |
| Integration & data (APIs, events, error handling, migration paths) | 12 |
| Reporting & analytics (line‑level margins, auditability, disclosures) | 8 |
| Security & compliance (roles, SOD, industry regs) | 8 |
| Extensibility & roadmap (low‑code, upgrades, cadence) | 8 |
| Implementation partner quality (references, staffing, methodology) | 12 |
| TCO & commercials (licenses, services, run‑state) | 12 |
| Change management & training approach | 8 |
| Fit for scale (multi‑entity, multi‑site, performance) | 6 |
| Vendor viability & support | 6 |
Tip: publish this matrix internally so everyone understands what “winning” looks like before the first demo.
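If it helps to sanity‑check the arithmetic, here is a minimal scoring sketch using the weights from the table (criterion names abbreviated). The vendor scores are invented for illustration; a spreadsheet works just as well.

```python
# Weighted-scorecard calculator; weights mirror the table above, scores are invented.
CRITERIA_WEIGHTS = {
    "Process fit": 20,
    "Integration & data": 12,
    "Reporting & analytics": 8,
    "Security & compliance": 8,
    "Extensibility & roadmap": 8,
    "Implementation partner quality": 12,
    "TCO & commercials": 12,
    "Change management & training": 8,
    "Fit for scale": 6,
    "Vendor viability & support": 6,
}

def weighted_total(scores: dict[str, int]) -> int:
    """Each score is 1-5; the total is the weight-multiplied sum across all criteria."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Hypothetical example: two vendors differ only on process fit and integration.
vendor_a = {c: 3 for c in CRITERIA_WEIGHTS} | {"Process fit": 2, "Integration & data": 2}
vendor_b = {c: 3 for c in CRITERIA_WEIGHTS} | {"Process fit": 4, "Integration & data": 4}
print(weighted_total(vendor_a), weighted_total(vendor_b))  # 268 vs. 332
```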
Scripted demo scenarios you should require
Pick five scenarios that represent 80% of your risk and revenue. Examples:
Order‑to‑cash with a real exception (credit hold, pricing escalation, partial shipment, chargeback).
Plan‑to‑produce/fulfill (promise dates from constraints, substitutions, quality gates).
Procure‑to‑pay with landed cost (tolerances, accruals, 3‑way match exceptions).
Close & compliance (period locks, approvals, audit trails, disclosures).
Integration break/fix (retry, idempotency, and monitoring when an API/EDI message fails).
Require evidence - Transactions posted, reports run, logs shown.
Integration and data reality
Event posture - What events publish (order, ship, invoice, journal), webhooks vs. polling, retry/backoff, dead‑letter handling (see the sketch after this list).
Migration - How historical balances, open transactions, and master data move—and who owns cleansing.
Environments - Number of sandboxes, refresh cadence, and how extensions survive upgrades.
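When you probe event posture, it helps to know what credible retry/backoff and dead‑letter handling look like so you can recognize it in a demo. The sketch below is generic and assumes nothing about any vendor's API; process_message and send_to_dead_letter are hypothetical hooks, and the attempt counts and delays are arbitrary.

```python
# Minimal retry-with-backoff and dead-letter sketch for an inbound event/EDI message.
# process_message and send_to_dead_letter are hypothetical callbacks, not a vendor API.
import time

MAX_ATTEMPTS = 4
BASE_DELAY_S = 2  # doubles each attempt: 2s, 4s, 8s

def handle_event(message_id: str, payload: dict, process_message, send_to_dead_letter) -> bool:
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            # Idempotency: processing the same message_id twice must not double-post.
            process_message(message_id, payload)
            return True
        except Exception as exc:  # in practice, catch only the transient error types
            if attempt == MAX_ATTEMPTS:
                # Retries exhausted: park the message where someone can see and replay it.
                send_to_dead_letter(message_id, payload, reason=str(exc))
                return False
            time.sleep(BASE_DELAY_S * 2 ** (attempt - 1))
```

In a demo, ask to see the equivalent of this loop in the product's own tooling: the failed message, the retry history, and the queue where dead‑lettered messages wait for replay.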
Mini‑case (anonymized)
A multi‑site distributor entered selection convinced Vendor A was the favorite. We scripted five scenarios with their real pricing and EDI flows. Vendor A struggled to post a round‑trip without manual steps; Vendor B completed the flow, then showed monitoring and retries for failures. The scorecard flipped. The project went live on time because the selection artifacts became the UAT pack and cutover checklist.
Red flags
“We can show that after you sign.”
“Trust us, our API handles errors.” (no logs or retries in demo)
“We don’t publish roadmaps.”
“Customizations are how we do everything” (no configuration story)
“Don’t worry about data; we’ll handle it at the end.”
What you get if we partner
A defensible decision built on evidence, not enthusiasm.
Reusable artifacts—scripts, scorecard, TCO, risk register, SOW addenda—that roll straight into implementation.
A selection that feels like the first sprint of your program, not a throwaway exercise.
A strong software selection isn’t a beauty contest—it’s a controlled, evidence-based process that protects your business from costly missteps and sets your implementation up for success. If you use the scorecard, demo scripts, and evaluation patterns in this guide, you’ll see through polished demos and get to real capability. Whether you run the process internally or want help pressure-testing your path, the goal is the same: a defensible decision that stands up on go-live day and supports you for years.
FAQ
What is the ideal length of a software selection?
Six to eight weeks for mid‑market scope and a single business process (e.g., QMS, AR Automation) when criteria and scripts are prepared up front.
For ERP, expect four to six months, given the collaboration required across the entire organization and the need for personalized demos.
How many vendors should we invite to demos?
Three. More dilutes attention and rarely changes the outcome. When more vendors are in play, use an RFP process to cut a long-list to a short-list without demos.
Do we always need a pilot?
Only when a process or integration presents outsized risk. Pilot lightly, then decide.