A diagnostic tool designed to determine whether training is the right solution, identify root causes, and model business impact before development begins.
Stakeholders defaulted to training as the answer, even when the issue was rooted in a process gap, unclear expectations, or an environmental barrier. L&D was expected to deliver solutions without a standardized way to evaluate the request first. Decisions were being made on assumption rather than evidence, and there was no consistent method to assess business impact, cost, or feasibility before development began.
The result was that training got built for problems training couldn’t solve. Time and resources went into programs that moved completion metrics but didn’t change performance. The conversation needed to shift from “what should we build” to “should we build anything at all.” That required a different kind of tool entirely, not another course.
The tool runs a 19-question structured intake interview with 9 conditional follow-up questions that activate based on the user’s answers. It is conversational by design. Questions surface evidence, observable behaviors, business goals, and constraints, then feed into three independent scoring engines that each evaluate a different dimension of the request.
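A rough sketch of that conditional-question mechanism is below, assuming each follow-up carries a trigger predicate over the answers collected so far. The question IDs, prompts, and answer shape are illustrative, not the production intake schema.

```typescript
// Illustrative only: question IDs, prompts, and the answer shape are assumptions,
// not the production intake schema.
type Answers = Record<string, string>;

interface FollowUp {
  id: string;
  prompt: string;
  // Activates only when the predicate matches what the user has already said.
  triggeredBy: (answers: Answers) => boolean;
}

const followUps: FollowUp[] = [
  {
    id: "prior-training-outcome",
    prompt: "You mentioned this group was trained on this before. What changed afterward?",
    triggeredBy: (a) => a["prior-training"] === "yes",
  },
  {
    id: "environmental-barrier-detail",
    prompt: "What in the work environment prevents the behavior today?",
    triggeredBy: (a) => a["suspected-cause"] === "environment",
  },
];

// After each base question, surface only the follow-ups the answers justify.
function activeFollowUps(answers: Answers): FollowUp[] {
  return followUps.filter((f) => f.triggeredBy(answers));
}
```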
Scores the request against gap type, evidence quality, motivation signals, environmental barriers, and prior training outcomes. Starts from a neutral baseline and adjusts based on what the data supports. Compliance and regulatory signals trigger a floor to prevent false negatives on required training.
Evaluates whether building is viable given the timeline, budget, SME availability, content stability, and stakeholder risk factors. A project can score high on training need but low on feasibility, and the tool surfaces both without conflating them.
Tracks the quality and completeness of evidence behind the scores. High confidence means the recommendation rests on multiple corroborating sources. Low confidence flags where data collection should happen before any development decision is made.
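The separation between these three dimensions can be sketched roughly as follows. Every field name, baseline, adjustment, and threshold here is an illustrative assumption rather than the production scoring logic, but it shows the shape: each engine starts from its own baseline, records the reason behind every adjustment, and the compliance floor lives only inside the need score.

```typescript
// All field names, baselines, adjustments, and thresholds are illustrative assumptions.
const clamp = (n: number) => Math.max(0, Math.min(100, n));

interface IntakeSignals {
  gapType: "knowledge" | "skill" | "process" | "environment" | "motivation";
  evidenceSources: number;       // count of corroborating data sources provided
  priorTrainingFailed: boolean;
  complianceDriven: boolean;
  smeAvailable: boolean;
  contentStable: boolean;
  weeksAvailable: number;
}

interface ScoreWithReasons {
  score: number;       // 0 to 100
  reasons: string[];   // every adjustment is recorded so the result stays explainable
}

// Training need: neutral baseline, evidence-driven adjustments, compliance floor.
function trainingNeedScore(s: IntakeSignals): ScoreWithReasons {
  let score = 50;
  const reasons: string[] = [];
  if (s.gapType === "knowledge" || s.gapType === "skill") {
    score += 20; reasons.push("Gap type is trainable.");
  } else {
    score -= 20; reasons.push(`Gap type "${s.gapType}" points away from training.`);
  }
  if (s.priorTrainingFailed) {
    score -= 15; reasons.push("Prior training did not change performance.");
  }
  if (s.complianceDriven && score < 60) {
    score = 60; reasons.push("Compliance floor applied: required training is never scored as unnecessary.");
  }
  return { score: clamp(score), reasons };
}

// Feasibility: can this be built well, regardless of whether it should be built at all.
function feasibilityScore(s: IntakeSignals): ScoreWithReasons {
  let score = 50;
  const reasons: string[] = [];
  if (!s.smeAvailable)      { score -= 20; reasons.push("No SME committed."); }
  if (!s.contentStable)     { score -= 15; reasons.push("Content is still changing."); }
  if (s.weeksAvailable < 4) { score -= 15; reasons.push("Timeline is under four weeks."); }
  return { score: clamp(score), reasons };
}

// Confidence: how much evidence sits behind the other two scores.
function confidenceScore(s: IntakeSignals): ScoreWithReasons {
  return {
    score: clamp(30 + s.evidenceSources * 20),
    reasons: [`${s.evidenceSources} corroborating source(s) provided.`],
  };
}
```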
A decision tree that routes to a recommended delivery method based on gap type, learner characteristics, geographic dispersion, urgency, and budget. The recommendation is suppressed when the verdict is Do Not Build or when data is insufficient, so the tool never produces a modality suggestion it cannot support.
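A compressed sketch of that routing-and-suppression behavior follows. The verdict labels beyond Strong Build and Do Not Build, the modality names, and the branching criteria are assumptions for illustration, not the production decision tree.

```typescript
// Illustrative routing logic; verdict labels, modality names, and branches are assumptions.
type Verdict = "Strong Build" | "Conditional Build" | "Do Not Build" | "Insufficient Data";
type Modality = "eLearning" | "Virtual instructor-led" | "In-person workshop" | "Job aid";

interface ModalityInputs {
  verdict: Verdict;
  gapType: "knowledge" | "skill";
  learnersDispersed: boolean;   // geographically spread across sites
  urgentRollout: boolean;
  budgetConstrained: boolean;
}

function recommendModality(i: ModalityInputs): Modality | null {
  // Suppress the recommendation entirely when the tool cannot support one.
  if (i.verdict === "Do Not Build" || i.verdict === "Insufficient Data") return null;

  if (i.gapType === "knowledge") {
    // Knowledge gaps under time or budget pressure lean toward lightweight delivery.
    return i.urgentRollout || i.budgetConstrained ? "Job aid" : "eLearning";
  }
  // Skill gaps need practice; geography decides the room.
  return i.learnersDispersed ? "Virtual instructor-led" : "In-person workshop";
}
```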
Six instructional design frameworks run behind every analysis: Mager and Pipe, Kirkpatrick, Bloom’s Taxonomy, Merrill’s First Principles, Knowles/Andragogy, and Action Mapping. The tool applies them simultaneously so no single framework drives the outcome. Results surface across three output tabs.
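Roughly, that composition might look like the sketch below. The analyzer stubs and the three-tab output shape are assumptions about how parallel framework results could be merged, not the production implementation.

```typescript
// Illustrative composition only: analyzer stubs and the tab shape are assumptions.
interface FrameworkFinding {
  framework: string;
  findings: string[];
}

type Analyzer = (intake: Record<string, string>) => FrameworkFinding;

// Stand-in analyzers; the real tool applies each framework's actual criteria.
const frameworks = [
  "Mager & Pipe", "Kirkpatrick", "Bloom's Taxonomy",
  "Merrill's First Principles", "Knowles/Andragogy", "Action Mapping",
];
const analyzers: Analyzer[] = frameworks.map((name) => (intake) => ({
  framework: name,
  findings: [`${name} lens applied to ${Object.keys(intake).length} intake answers.`],
}));

function analyze(intake: Record<string, string>) {
  // Every framework sees the same intake; no single one drives the outcome.
  const all = analyzers.map((run) => run(intake));
  return {
    summaryTab: all.flatMap((f) => f.findings),                       // headline findings
    frameworksTab: all,                                               // per-framework detail
    actionPlanTab: all.map((f) => `Next step informed by ${f.framework}`),
  };
}
```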
After the analysis generates, a pushback chat allows users to challenge findings, correct assumptions, and apply overrides. The tool detects correction intent and re-runs the affected scoring functions before refreshing all three tabs. Every session can be exported as a formatted PDF report and reimported later, so analysis is never lost between conversations.
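One way to sketch that loop is below. The correction-intent check, the override shape, and the re-scoring call are illustrative stand-ins; a simple keyword check stands in for however the production tool actually detects correction intent.

```typescript
// Sketch under assumed names: the correction check, override shape, and rescoring are illustrative.
interface SessionState {
  answers: Record<string, string>;
  overrides: Record<string, string>;
  scores: { need: number; feasibility: number; confidence: number };
}

interface Correction {
  field: string;        // which intake answer the user is correcting
  newValue: string;
}

// Decide whether a chat message is a correction rather than a clarifying question.
function detectCorrection(message: string): boolean {
  return /actually|that's wrong|correction|we do have|we don't have/i.test(message);
}

function applyCorrection(state: SessionState, c: Correction): SessionState {
  const next: SessionState = {
    ...state,
    overrides: { ...state.overrides, [c.field]: c.newValue },
    answers: { ...state.answers, [c.field]: c.newValue },
  };
  // Recompute the scores, then refresh all three tabs (the production tool
  // re-runs only the scoring functions the corrected field feeds).
  next.scores = rescore(next.answers);
  return next;
}

// Placeholder standing in for the three scoring engines described above.
function rescore(answers: Record<string, string>) {
  return { need: 0, feasibility: 0, confidence: 0 };
}
```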
“The hardest part of this project wasn’t building the tool. It was designing something that could tell a stakeholder their training request was wrong.”
The scoring logic had to be defensible, not just functional. Any result the tool produced needed to be explainable in a room with a director or VP who came in expecting a yes. That required careful decisions about what each scoring function owned, how evidence quality was weighted, and where penalties applied so no single factor could distort the outcome. Every score is accompanied by the specific reasons that drove it. Every verdict surfaces the factors that shaped it.
The design principle behind every decision was that this tool should be an asset in a stakeholder conversation. That meant the output could not be a black box that produced a number. It had to be something an L&D professional could walk into a meeting with and defend, line by line, because the person across the table had already decided what they wanted the answer to be.
The demo below runs two pre-loaded scenarios: a customer service onboarding knowledge gap with a Strong Build verdict, and a warehouse system migration where training is not the right solution. Both show the full analysis flow, including scoring, framework output, and the recommended action plan.
Demo version. Two curated scenarios are pre-loaded. No API calls are made and no data leaves your browser. The production tool connects to the Anthropic API for live AI-generated analysis.
Screenshots from a live session showing the full intake flow, scoring results, and PDF report export.
Curious how this kind of thinking shows up across all my work? View Approach →