Performance Strategy · AI Diagnostic

Training Needs Analysis AI

A diagnostic tool designed to determine whether training is the right solution, identify root causes, and model business impact before development begins.

Discover · Design · Build · Measure
6 Frameworks · ID Theory Applied
Solution Alignment · Training or Non-Training
Repeatable Process · Standardized Intake
Cost Avoidance · Build Feasibility Modeled
“Training was being requested without a validated problem. No root cause, no consistent intake process, and no way to know if training was the right solution.”

Stakeholders defaulted to training as the answer, even when the issue was rooted in a process gap, unclear expectations, or an environmental barrier. L&D was expected to deliver solutions without a standardized way to evaluate the request first. Decisions were being made on assumption rather than evidence, and there was no consistent method to assess business impact, cost, or feasibility before development began.

The result was that training got built for problems training couldn’t solve. Time and resources went into programs that moved completion metrics but didn’t change performance. The conversation needed to shift from “what should we build” to “should we build anything at all.” That required a different kind of tool than a course.

Primary Users
Instructional designers · Performance consultants · Business stakeholders
Built With
HTML · CSS · Vanilla JS · Anthropic API · Claude AI
Output Formats
In-tool analysis · Shareable PDF report · Stakeholder briefing
Responsibilities
Tool design · Scoring logic · Framework mapping · Full development · QA

A structured diagnostic, not a form.

The tool runs a 19-question structured intake interview with 9 conditional follow-up questions that activate based on the user’s answers. It is conversational by design. Questions surface evidence, observable behaviors, business goals, and constraints, then feed into three independent scoring engines that each evaluate a different dimension of the request.
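In sketch form, the conditional activation can be wired declaratively; the question IDs, text, and trigger conditions below are hypothetical, not the production schema.

```js
// Illustrative intake wiring: each follow-up declares the answer that
// activates it. Question IDs, text, and conditions are hypothetical.
const followUps = [
  {
    id: "q07a",
    parent: "q07", // e.g. "Has this group been trained on this before?"
    activateWhen: (answer) => answer === "yes",
    text: "What changed, or failed to change, after that training?",
  },
];

// After each answer is recorded, pull any newly triggered follow-ups
// to the front of the remaining question queue.
function nextQuestions(baseQueue, answers) {
  const triggered = followUps
    .filter((f) => f.activateWhen(answers[f.parent]))
    .filter((f) => !(f.id in answers)) // skip follow-ups already asked
    .map((f) => f.id);
  return [...triggered, ...baseQueue];
}
```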

Training Recommendation Score

Scores the request against gap type, evidence quality, motivation signals, environmental barriers, and prior training outcomes. Starts from a neutral baseline and adjusts based on what the data supports. Compliance and regulatory signals trigger a floor to prevent false negatives on required training.
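A minimal sketch of that pass, assuming a 0-100 scale; the factor weights and field names are invented for illustration, and the compliance floor mirrors the behavior described above.

```js
// Minimal sketch of the training-need pass, assuming a 0-100 scale.
// Factor weights and field names are invented for illustration.
function trainingRecommendationScore(intake) {
  let score = 50; // neutral baseline: no default toward "build"

  if (["knowledge", "skill"].includes(intake.gapType)) score += 15;
  if (["process", "environment"].includes(intake.gapType)) score -= 20;
  if (intake.evidenceQuality === "observed") score += 10; // behavior seen directly
  if (intake.lowMotivationSignals) score -= 10;           // training won't fix won't-do
  if (intake.priorTrainingFailed) score -= 15;            // this answer was already tried
  score = Math.max(0, Math.min(100, score));

  // Compliance floor: required training can't be scored into a false negative.
  if (intake.isCompliance) score = Math.max(score, 60);
  return score;
}
```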

ROI and Build Feasibility Score

Evaluates whether building is viable given the timeline, budget, SME availability, content stability, and stakeholder risk factors. A project can score high on training need but low on feasibility, and the tool surfaces both without conflating them.
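A comparable sketch of the feasibility pass, again with invented weights, showing how the two dimensions stay separate:

```js
// Sketch of the feasibility pass, deliberately separate from the
// training-need score. Factor names and weights are illustrative.
function buildFeasibilityScore(intake) {
  let score = 50; // same neutral baseline, different dimension
  if (intake.timelineWeeks < 4) score -= 20;               // compressed timeline
  if (intake.smeAvailability === "low") score -= 15;       // no one to source content from
  if (intake.contentStability === "volatile") score -= 15; // material will churn post-launch
  if (intake.budgetConfirmed) score += 10;
  if (intake.stakeholderRisk === "high") score -= 10;
  return Math.max(0, Math.min(100, score));
}

// High need and low feasibility can coexist; both are reported, never blended.
const sample = { timelineWeeks: 3, smeAvailability: "low",
                 contentStability: "volatile", budgetConfirmed: false,
                 stakeholderRisk: "high" };
console.log(buildFeasibilityScore(sample)); // 0 after clamping
```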

Data Confidence Score

Tracks the quality and completeness of evidence behind the scores. High confidence means the recommendation rests on multiple corroborating sources. Low confidence flags where data collection should happen before any development decision is made.
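One way to express that check, with assumed field names and thresholds:

```js
// Illustrative confidence pass: count corroborating evidence sources and
// penalize unanswered intake items. Field names and thresholds are assumed.
function dataConfidence(intake) {
  const sources = [
    intake.hasPerformanceData,   // metrics, dashboards, QA scores
    intake.hasDirectObservation, // behavior seen, not just reported
    intake.hasStakeholderInput,
    intake.hasLearnerInput,
  ].filter(Boolean).length;

  const unanswered = intake.skippedQuestions ?? 0;
  if (sources >= 3 && unanswered === 0) return "high"; // corroborated
  if (sources >= 2) return "medium";
  return "low"; // flag: collect data before any development decision
}
```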

Modality Recommendation

A decision tree that routes to a recommended delivery method based on gap type, learner characteristics, geographic dispersion, urgency, and budget. The recommendation is suppressed when the verdict is Do Not Build or when data is insufficient, so the tool never produces a modality suggestion it cannot support.
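The guard clause is the important part; a simplified version of the tree, with illustrative branches:

```js
// Simplified version of the routing tree; branch order and labels are
// illustrative. The guard clause mirrors the suppression rule above.
function recommendModality(verdict, confidence, intake) {
  // Never emit a modality the analysis can't support.
  if (verdict === "Do Not Build" || confidence === "low") return null;

  if (intake.urgency === "immediate") return "Job aid plus live virtual session";
  if (intake.gapType === "skill" && !intake.geographicallyDispersed)
    return "Instructor-led workshop with structured practice";
  if (intake.geographicallyDispersed && intake.budget === "constrained")
    return "Self-paced eLearning";
  return "Blended program";
}
```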

Analysis ready to share with a stakeholder.

Six instructional design frameworks run behind every analysis: Mager and Pipe, Kirkpatrick, Bloom’s Taxonomy, Merrill’s First Principles, Knowles/Andragogy, and Action Mapping. The tool applies them simultaneously so no single framework drives the outcome. Results surface across three output tabs.
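Conceptually, each framework is an independent lens over the same intake data; the stub findings below are placeholders for the real analyses, shown only to illustrate the side-by-side structure.

```js
// Each framework is an independent function over the same intake data, and
// the verdict reads across all returned findings. The stub findings below
// are placeholders, not the production analyses.
const lenses = [
  ["Mager & Pipe",        (d) => d.priorTrainingFailed ? "probe non-training causes" : "confirm skill/knowledge gap"],
  ["Kirkpatrick",         (d) => `plan evaluation at levels 1-4 against: ${d.businessGoal}`],
  ["Bloom's Taxonomy",    (d) => "classify objectives by cognitive level"],
  ["Merrill",             (d) => "anchor practice in real work tasks"],
  ["Knowles / Andragogy", (d) => "tie content to learners' immediate problems"],
  ["Action Mapping",      (d) => `work backward from: ${d.businessGoal}`],
];

const intakeData = { priorTrainingFailed: false, businessGoal: "reduce average handle time" };
const findings = lenses.map(([lens, run]) => ({ lens, finding: run(intakeData) }));
// No single lens drives the outcome; the verdict weighs all six findings.
```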

Tab 01
Executive Briefing
Strategic summary written for a VP or director-level audience. Covers verdict, confidence level, recommended approach, and the most important factors that drove the recommendation. Designed to be shared directly without translation.
Tab 02
Full Diagnostic
Six-framework analysis with a data gap inventory, conflict detection, cost model, and a transfer plan that spans Kirkpatrick levels 1 through 4. Covers Day 0 pre-training through Day 60 to 90 sustainment planning.
Tab 03
Action Plan
Six to eight prioritized action items with owners and timelines. References the specific intake data behind each recommendation so actions are traceable to evidence, not assumption.
Pushback Chat + PDF Export

After the analysis generates, a pushback chat allows users to challenge findings, correct assumptions, and apply overrides. The tool detects correction intent and re-runs the affected scoring functions before refreshing all three tabs. Every session can be exported as a formatted PDF report and reimported later, so analysis is never lost between conversations.
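A rough sketch of that correction loop, with a hypothetical intent parser and an invented field-to-engine routing table:

```js
// Rough sketch of the correction loop. The intent regex, the stand-in
// parser, and the field-to-engine routing table are all illustrative.
const fieldToEngines = {
  budgetConfirmed: ["feasibility"],
  gapType: ["trainingNeed", "modality"],
  evidenceQuality: ["trainingNeed", "confidence"],
};

// Hypothetical stand-in for the parser; the production tool uses the model
// to detect correction intent and extract the corrected field values.
function extractFieldPatch(message) {
  if (/budget (is )?(now )?approved/i.test(message)) return { budgetConfirmed: true };
  return {};
}

function handlePushback(message, intake, engines) {
  if (!/actually|that's wrong|correction|override/i.test(message)) {
    return null; // plain question: answer it, don't re-score
  }
  const patch = extractFieldPatch(message);
  Object.assign(intake, patch); // apply the override to the intake record

  // Re-run only the scoring functions that consume the corrected fields;
  // the UI then refreshes all three tabs from the updated scores.
  const affected = new Set(Object.keys(patch).flatMap((f) => fieldToEngines[f] ?? []));
  affected.forEach((name) => engines[name].run(intake));
  return affected;
}
```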

“The hardest part of this project wasn’t building the tool. It was designing something that could tell a stakeholder their training request was wrong.”

The scoring logic had to be defensible, not just functional. Any result the tool produced needed to be explainable in a room with a director or VP who came in expecting a yes. That required careful decisions about what each scoring function owned, how evidence quality was weighted, and where penalties applied so no single factor could distort the outcome. Every score is accompanied by the specific reasons that drove it. Every verdict surfaces the factors that shaped it.

The design principle behind every decision was that this tool should be an asset in a stakeholder conversation. That meant the output could not be a black box that produced a number. It had to be something an L&D professional could walk into a meeting with and defend, line by line, because the person across the table had already decided what they wanted the answer to be.

Try the diagnostic.

The demo below runs two pre-loaded scenarios: a customer service onboarding knowledge gap with a Strong Build verdict, and a warehouse system migration where training is not the right solution. Both show the full analysis flow, including scoring, framework output, and the recommended action plan.

Training Needs Analysis AI — Interactive Demo

Demo version. Two curated scenarios are pre-loaded. No API calls are made and no data leaves your browser. The production tool connects to the Anthropic API for live AI-generated analysis.
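For reference, the live request follows the standard Anthropic Messages API shape; the model id below is a placeholder, and in a real deployment the key belongs behind a server proxy rather than in browser code.

```js
// Rough shape of the live request (placeholder model id; API_KEY is assumed
// to be injected from a server proxy, never shipped in client code).
async function runAnalysis(prompt) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": API_KEY,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-5", // placeholder; chosen per deployment
      max_tokens: 4096,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.content[0].text; // first text block of the response
}
```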

Training Needs Analysis AI — Workflow Overview

Screenshots from a live session showing the full intake flow, scoring results, and PDF report export.

From assumption to evidence.

Standardized Intake
Replaced inconsistent stakeholder intake with a structured, repeatable diagnostic process. Every request goes through the same 19-question analysis before a development decision is made.
Non-Training Identification
Enables explicit identification of non-training solutions when the root cause is a process gap, environmental barrier, or motivation issue. The verdict logic is designed to surface this, not obscure it.
Stakeholder-Ready Output
Generates a complete PDF analysis report including training recommendation, business impact modeling, cost estimate, and a Kirkpatrick-aligned transfer plan. Ready to share without additional preparation.

Curious how this kind of thinking shows up across all my work? View Approach →