[Diagram] Where Field IQ fits in the post-mission evaluation workflow: existing exercise data is captured; Field IQ structures post-mission findings; OCs and evaluators review evidence-backed findings for the AAR and post-exercise assessment; leadership reviews and decides. Note: Field IQ is post-mission only. It supports evaluator judgment and mission command, does not automate grading, and does not issue operational directives.

Who This Helps

Designed for teams responsible for post-mission evaluation, AAR, and readiness review in training, certification, experimentation, and test environments.

Use Cases

Post-mission evaluation support for test, training, and review: evidence-backed findings that support evaluator judgment and mission command.

Field IQ supports teams responsible for explaining what happened after a mission, test, or exercise, using data they already collect. These use cases align with real evaluation workflows where a narrow pilot can produce value quickly without replacing existing systems.

Training Evaluation and AAR

OWNER
Training director / Evaluator lead / Instructor cadre lead
OUTPUTS
  • Evidence-backed findings for evaluator review
  • Comparable outputs across runs or cohorts
  • Plan-relative comparison when plan data is available
SO WHAT

Improves evaluator consistency and reduces AAR subjectivity by producing standardized, evidence-backed review outputs across repeated training events.

PILOT WEDGE (first step)

One course block of 5–10 sessions; validate evaluator agreement and measure the reduction in data-stitching time.

Test & Evaluation / Range Ops

OWNER
Test Director / Range Ops Lead / T&E Support PM
OUTPUTS
  • Plan-relative findings when mission plan or route geometry is available
  • Movement-pattern findings, dwell events, and data-confidence indicators
  • JSON outputs for integration, with optional PDF review extract
SO WHAT

Faster, more defensible post-test reporting by turning existing exercise data into review-ready findings instead of relying on manual reconstruction.

PILOT WEDGE (first step)

One test profile, one export format, 5–10 runs, one evaluator loop to validate time-to-first-finding.
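To make "JSON outputs for integration" concrete, here is a minimal sketch of what an evidence-backed finding record might look like. The field names and structure below are illustrative assumptions, not Field IQ's actual schema:

```python
import json

# Hypothetical evidence-backed finding record. Field names and values
# are illustrative assumptions, not Field IQ's actual output schema.
finding = {
    "run_id": "run-004",
    "finding_type": "route_deviation",
    "summary": "Element deviated from planned route for 6 minutes",
    "plan_relative": True,        # only populated when plan geometry is available
    "evidence": [
        {"source": "gps_track.csv", "rows": [412, 498]},
    ],
    "data_confidence": "medium",  # flags gaps or low-quality source data
}

print(json.dumps(finding, indent=2))
```

A record like this can feed an existing review tool directly, while the optional PDF extract serves evaluators who work from printed material.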

Defense Experimentation Programs

OWNER
Lab experiment lead / OTA PM / Experimentation cell lead
OUTPUTS
  • Repeatable post-run findings across trials
  • Plan-relative comparison when plan data is available
  • Evidence-backed findings for cross-run comparison
SO WHAT

Supports more consistent comparison across trials by producing repeatable post-mission findings under a controlled doctrine/configuration posture.

PILOT WEDGE (first step)

A short sprint on 1–3 runs to validate outputs and define the scope of a larger experimentation pilot.

Autonomy Test & Evaluation

OWNER
Autonomy test cell / Program manager / Prime integrator lead
OUTPUTS
  • Post-run findings on deviation, pacing, dwell, and data confidence
  • Plan-relative comparison when plan geometry is provided
  • JSON outputs for integration into existing review tools
SO WHAT

Turns post-run review into repeatable, evidence-backed evaluation outputs that reduce reconstruction time and support more defensible assessment.

PILOT WEDGE (first step)

Start with one mission profile and one log export converted to CSV; run 3–10 trials; deliver repeatable evaluation artifacts.
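As an illustration of the kind of post-run metric involved, here is a minimal sketch of dwell-event detection over a CSV track log. The thresholds, column names, and logic are assumptions for illustration, not Field IQ's actual algorithm:

```python
import csv
import io
import math

# Illustrative assumptions: a dwell event is a stretch where the platform
# stays within a small radius for at least a minimum duration.
DWELL_RADIUS_M = 15.0
MIN_DWELL_S = 60.0

def dwell_events(rows):
    """rows: iterable of dicts with 't' (seconds), 'x', 'y' (meters)."""
    events = []
    anchor = None  # (start_time, x, y) of the candidate dwell
    last_t = None
    for r in rows:
        t, x, y = float(r["t"]), float(r["x"]), float(r["y"])
        if anchor and math.hypot(x - anchor[1], y - anchor[2]) <= DWELL_RADIUS_M:
            last_t = t  # still inside the dwell radius; extend the candidate
            continue
        if anchor and last_t - anchor[0] >= MIN_DWELL_S:
            events.append({"start": anchor[0], "end": last_t})
        anchor, last_t = (t, x, y), t  # start a new candidate here
    if anchor and last_t - anchor[0] >= MIN_DWELL_S:
        events.append({"start": anchor[0], "end": last_t})
    return events

# Tiny synthetic track: stationary for 90 s, then a large displacement.
track = """t,x,y
0,0,0
30,2,1
90,3,2
120,200,5
"""
print(dwell_events(csv.DictReader(io.StringIO(track))))
# → [{'start': 0.0, 'end': 90.0}]
```

Even a sketch this small shows why converting one log export to CSV is a sensible pilot wedge: once positions and timestamps are in a common format, metrics like dwell become repeatable across trials.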

Field IQ is post-mission and read-only. It supports evaluator judgment and mission command without issuing operational directives or automated adjudication.