
SC/Logistics KPI Library

Last updated: 2026-04-30 | Total KPIs: 15

Quick filter: [[#Intralogistics]] | [[#Transportation]] | [[#Inventory]] | [[#Procurement]] | [[#Customer Service]] | [[#Financial]]

Benchmark tiers: BIC = best-in-class | Med = median | Target = design or program target | Ceiling = upper design limit

Use /kpi-library [filter] to query. Use /kpi-library add to add a new entry. Use /kpi-library build to rebuild from wiki.


## Intralogistics

| KPI | Formula | Unit | Med | BIC | Source | Related |
|---|---|---|---|---|---|---|
| Dock-to-Stock Time | Storage completion timestamp − Receiving scan timestamp | hr | 8–12 | 2–4 (with ASN/EDI 856) | WERC | [[Inbound & Receiving]] |
| Order Cycle Time | Ship timestamp − Order receipt timestamp | hr | 24 | <6 | WERC 2025 | [[Warehouse KPIs]] |
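Both time KPIs are plain timestamp subtractions. A minimal sketch in Python, with illustrative timestamps (not from any real system):

```python
from datetime import datetime

def hours_between(start_ts: str, end_ts: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps."""
    start = datetime.fromisoformat(start_ts)
    end = datetime.fromisoformat(end_ts)
    return (end - start).total_seconds() / 3600

# Dock-to-Stock: storage completion minus receiving scan
dts = hours_between("2026-04-01T06:15:00", "2026-04-01T15:15:00")
print(f"Dock-to-Stock: {dts:.1f} hr")  # 9.0 hr, inside the 8–12 hr median band
```

The same function computes Order Cycle Time with ship and order-receipt timestamps as inputs.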
| KPI | Formula | Unit | Med | BIC | Source | Related |
|---|---|---|---|---|---|---|
| On-Time Shipment | (Orders shipped on or before promised date ÷ Total orders shipped) × 100 | % | 94 | ≥99 | WERC | [[Warehouse KPIs]] |
| Order Accuracy | (Orders shipped without error ÷ Total orders shipped) × 100 | % | 99.0 | ≥99.9 | WERC | [[Warehouse KPIs]] |
| Pick Accuracy | (Lines picked without error ÷ Total lines picked) × 100 | % | 99.5 | ≥99.68 | WERC 2025 | [[Warehouse KPIs]] |
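The accuracy KPIs share one ratio shape. A sketch with made-up volumes chosen to land on the median benchmarks:

```python
def pct(numerator: int, denominator: int) -> float:
    """Generic (good ÷ total) × 100 accuracy ratio."""
    return 100 * numerator / denominator

# Illustrative monthly volumes, not real data
orders_shipped, error_free_orders = 12_000, 11_880
lines_picked, error_free_lines = 96_000, 95_520

print(f"Order Accuracy: {pct(error_free_orders, orders_shipped):.1f}%")  # 99.0%
print(f"Pick Accuracy:  {pct(error_free_lines, lines_picked):.1f}%")     # 99.5%
```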
| KPI | Formula | Unit | Med | BIC | Source | Related |
|---|---|---|---|---|---|---|
| Cost Per Order (CPO) | Total DC operating cost ÷ Total orders shipped | $/order | context-specific | context-specific | Internal | [[Labor Modeling]] |
| Order Lines Per Hour | Lines picked ÷ Direct labor hours | lines/hr | 10 | 35 | WERC | [[Warehouse KPIs]] |
| Units Per Hour (UPH) | Units processed ÷ Direct labor hours | units/hr | context-specific | context-specific | Internal | [[Labor Modeling]] |
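The cost and productivity KPIs are also simple quotients; the inputs below are illustrative placeholders, since both CPO and UPH are context-specific:

```python
def cost_per_order(total_dc_cost: float, orders_shipped: int) -> float:
    """Total DC operating cost spread across all orders shipped."""
    return total_dc_cost / orders_shipped

def lines_per_hour(lines_picked: int, direct_labor_hours: float) -> float:
    """Pick productivity per direct labor hour."""
    return lines_picked / direct_labor_hours

print(cost_per_order(4_200_000, 600_000))  # 7.0 $/order (hypothetical DC)
print(lines_per_hour(96_000, 9_600))       # 10.0 lines/hr
```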
| KPI | Formula | Unit | Target | Ceiling | Source | Related |
|---|---|---|---|---|---|---|
| AGV Fleet Utilization | Active task time ÷ Available operating time | % | 70 | 80 | M/G/1 queuing math | [[AMR Fleet Sizing]] |
| AMR Robotic Pick Success Rate | Successful picks ÷ Attempted picks | % | not documented | not documented | Internal | [[AMR Fleet Sizing]] |
| AS/RS Storage Utilization | Locations occupied ÷ Total locations | % | 80 | 85 | Design heuristic | [[Reliability and Design Safety Factors]] |
| Sortation Throughput (nameplate derating) | Design throughput = Nameplate CPM × 0.80 | CPM | — | 80% of nameplate | Industry standard | [[Throughput Math - Sortation]] |
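A sketch of the two formula-driven rows: the 80% nameplate derating, and the Pollaczek-Khinchine mean-wait formula behind the M/G/1 queuing reference in the AGV row. The arrival and service figures are illustrative, chosen so utilization lands on the 70% target:

```python
def design_throughput_cpm(nameplate_cpm: float, derate: float = 0.80) -> float:
    """Sustained design throughput as a fraction of vendor nameplate CPM."""
    return nameplate_cpm * derate

def mg1_mean_wait(arrival_rate: float, mean_service: float, second_moment: float) -> float:
    """Pollaczek-Khinchine mean queue wait for an M/G/1 station.
    arrival_rate in tasks/min, service-time moments in min and min^2."""
    rho = arrival_rate * mean_service  # utilization; must stay below 1
    if rho >= 1:
        raise ValueError("unstable queue: utilization >= 1")
    return arrival_rate * second_moment / (2 * (1 - rho))

print(design_throughput_cpm(200))  # 160.0 CPM from a 200 CPM nameplate
# Hypothetical AGV dispatch point: 0.35 tasks/min, mean service 2.0 min,
# E[S^2] = 5.0 min^2 -> utilization 0.70, matching the 70% target above
print(round(mg1_mean_wait(0.35, 2.0, 5.0), 2))  # 2.92 min mean wait
```

Waits blow up nonlinearly as utilization approaches 1, which is why the ceiling sits at 80% rather than 100%.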

## Transportation

Expand via /kpi-library add or /autoresearch transportation KPIs OTIF freight cost per mile


## Inventory

Expand via /kpi-library add or /autoresearch inventory KPIs days on hand fill rate


## Procurement

Expand via /kpi-library add or /autoresearch procurement KPIs purchase price variance supplier OTIF


## Customer Service

Expand via /kpi-library add or /autoresearch customer service supply chain KPIs


## Financial

| KPI | Formula | Unit | Notes | Source | Related |
|---|---|---|---|---|---|
| Total Cost per Sales Order | Total end-to-end SC cost ÷ Total sales orders processed | $/order | APQC reports by percentile tier (bottom/median/top quartile/top decile); compare within industry vertical | APQC OSB/PCF | [[Supply Chain Benchmarking Databases]] |
| KPI | Formula | Unit | Notes | Source | Related |
|---|---|---|---|---|---|
| Inventory Turns | COGS ÷ Average inventory value | turns/yr | 5% weight in Gartner Top 25 composite; strategic framing, not operational diagnostic | Gartner Top 25 | [[Supply Chain Benchmarking Databases]] |
| ROPA (Return on Physical Assets) | Operating income ÷ Physical assets | % | 5% weight in Gartner Top 25 composite; use for supply chain strategy conversations, not DC benchmarking | Gartner Top 25 | [[Supply Chain Benchmarking Databases]] |
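Both financial ratios are straightforward to compute; the dollar figures below are invented round numbers for illustration only:

```python
def inventory_turns(cogs: float, avg_inventory_value: float) -> float:
    """How many times inventory turns over per year."""
    return cogs / avg_inventory_value

def ropa_pct(operating_income: float, physical_assets: float) -> float:
    """Return on Physical Assets, as a percentage."""
    return 100 * operating_income / physical_assets

print(inventory_turns(50_000_000, 10_000_000))  # 5.0 turns/yr
print(ropa_pct(8_000_000, 64_000_000))          # 12.5 %
```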

Every entry must have all fields populated. Write “not documented” — never leave blank — so gaps are visible and searchable.

| Field | Rule |
|---|---|
| Formula | Explicit numerator ÷ denominator. Not a description. |
| Unit | What the computed number is measured in. |
| Benchmark | Named source + year. Never “industry standard” without attribution. |
| Related | At least one wikilink to the concept page where this KPI is discussed in depth. |

  • Order accuracy vs. line accuracy: Order accuracy is harder — one line error fails the whole order. Line accuracy can be 99.9% while order accuracy is 97%.
  • UPH and CPO are operation-specific: Meaningless without stating operation type (e-comm, B2B, manual, automated). Never compare across unlike operations.
  • Shift utilization is the most sensitive ROI input: Most automation payback models fail because utilization is assumed too high. Model at actual shift patterns, not theoretical capacity.
  • Nameplate ≠ sustained throughput: Design sortation and conveyor systems to 80% of nameplate CPM. Vendors spec peak; operations run average.
  • WERC vs. APQC vs. Gartner audiences: WERC = DC operations leaders; APQC = CFO/finance; Gartner Top 25 = C-suite/boards. Wrong tool for the wrong audience produces bad conversations.
  • Translate every benchmark gap to a financial figure: A gap between 99.0% and 99.9% order accuracy means nothing to an executive. Cost per error × annual order volume = the conversation that gets action.
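Two of the notes above can be made concrete with arithmetic. The first function assumes line errors are independent (a simplification); the gap-to-dollars example uses hypothetical volumes and error costs:

```python
def order_accuracy_from_lines(line_accuracy: float, lines_per_order: float) -> float:
    """Assuming independent line errors, whole-order accuracy decays
    exponentially with order size: one bad line fails the order."""
    return line_accuracy ** lines_per_order

def annual_error_cost(accuracy: float, annual_orders: int, cost_per_error: float) -> float:
    """Translate an accuracy level into annual dollars lost to errors."""
    return (1 - accuracy) * annual_orders * cost_per_error

# 99.9% line accuracy on 8-line orders yields only ~99.2% order accuracy
print(round(order_accuracy_from_lines(0.999, 8), 4))  # 0.992

# Gap between 99.0% and 99.9% order accuracy at 500k orders/yr, $30/error
gap = annual_error_cost(0.990, 500_000, 30) - annual_error_cost(0.999, 500_000, 30)
print(f"${gap:,.0f}/yr")  # $135,000/yr, a figure an executive will act on
```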