Scoring Model
All scores are calculated server-side. This page describes the model as currently implemented in code; no manual editing is needed to keep it in sync.
Impact Score
Formula: cases_per_year × hours_per_case_saved
Unit: person-hours saved per year
cases_per_year is the total number of process occurrences per year at team level.
hours_per_case_saved is the total person-hours saved per occurrence — if multiple people are involved per case, sum their hours.
people_impacted is recorded as a reach metric and displayed on each idea, but is not part of the impact formula.
Raw impact is normalized to a 1–5 scale using the breakpoints below. If an impact_score_override is set on an idea, it takes precedence over the calculated value.
| Score | Person-hours saved / year | Meaning |
|---|---|---|
| 1 | 0 – 99 | Local fix — marginal saving |
| 2 | 100 – 999 | Small improvement — one team, noticeable benefit |
| 3 | 1,000 – 9,999 | Departmental impact — meaningful saving |
| 4 | 10,000 – 99,999 | Cross-site or major workflow improvement |
| 5 | 100,000+ | Transformative — global or near-global impact |
Priority Score
Formula: impact_score / effort_score
Effort score is entered manually (1–5). Priority score is the y-axis of the XY plot and the sort key for the ready-to-advance backlog.
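A minimal sketch of the priority calculation and the backlog sort it drives. Function names and the tuple shape of the ideas are hypothetical:

```python
def priority_score(impact_score: int, effort_score: int) -> float:
    """impact / effort. Effort is a manual 1-5 entry, so division by zero cannot occur."""
    return impact_score / effort_score

# Hypothetical backlog entries as (name, impact, effort) tuples,
# sorted descending by priority score as in the ready-to-advance backlog.
ideas = [("A", 4, 2), ("B", 5, 5), ("C", 3, 1)]
ranked = sorted(ideas, key=lambda i: priority_score(i[1], i[2]), reverse=True)
```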
Feasibility Score
Entered manually. Represents the realistic implementation horizon for this idea.
| Score | Label | Horizon |
|---|---|---|
| 1 | Theoretical | Multi-year, no clear path |
| 2 | Long horizon | 2–5 years |
| 3 | Medium term | 1–2 years |
| 4 | Near term | 3–12 months |
| 5 | Quick win | < 3 months |
Quick Win Flag
Calculated automatically. An idea is flagged as a quick win when all three conditions are met:
- Impact score ≥ 3
- Effort score ≤ 2
- Feasibility score ≥ 4
Quick wins are surfaced prominently at the top of the list view.
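The three conditions translate directly into a boolean check. A sketch under the assumption that the three scores are available as plain integers (`is_quick_win` is an illustrative name, not the server-side one):

```python
def is_quick_win(impact: int, effort: int, feasibility: int) -> bool:
    """True only when all three quick-win conditions hold."""
    return impact >= 3 and effort <= 2 and feasibility >= 4
```

Note that all three thresholds must hold at once — a high-impact, low-effort idea that is only feasible in 1–2 years (feasibility 3) is not flagged.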
Calibration Examples
| Example | Cases/yr | Hours/case | People | Raw (person-hrs) | Impact Score | Note |
|---|---|---|---|---|---|---|
| Automate a statistical release report | 10 | 8 | 3 | 80 | 1 | 10 reports/yr, 8 person-hrs saved each (one analyst). People=3 reviewers informed. |
| Routine enzyme activity QC method | 500 | 2 | 8 | 1,000 | 3 | 500 runs/yr, 2 person-hrs saved per run. People=8 analysts who run it. |
| LIMS module rolled out across sites | 2,000 | 6 | 50 | 12,000 | 4 | 2,000 transactions/yr, 6 person-hrs saved per transaction across staff involved. |
| AI-driven QC release decision system | 4,000 | 10 | 200 | 40,000 | 4 | 4,000 batch decisions/yr, 10 person-hrs saved per decision. People=global QC staff. |
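As a sanity check, the Raw and Impact Score columns above can be recomputed from the formula and breakpoints. This is a standalone illustrative sketch, not production code:

```python
# Band lower bounds mirror the impact breakpoint table on this page.
BANDS = [(100_000, 5), (10_000, 4), (1_000, 3), (100, 2), (0, 1)]

# (example, cases_per_year, hours_per_case_saved) from the calibration table.
EXAMPLES = [
    ("Automate a statistical release report", 10, 8),
    ("Routine enzyme activity QC method", 500, 2),
    ("LIMS module rolled out across sites", 2_000, 6),
    ("AI-driven QC release decision system", 4_000, 10),
]

results = []
for name, cases_per_year, hours_per_case_saved in EXAMPLES:
    raw = cases_per_year * hours_per_case_saved
    score = next(s for lower, s in BANDS if raw >= lower)
    results.append((raw, score))
    print(f"{name}: raw={raw:,} person-hrs/yr -> impact {score}")
```

Running this reproduces the table: raw values 80, 1,000, 12,000, and 40,000 with impact scores 1, 3, 4, and 4.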