Are We Still Testing, or Are We Just Measuring?

The Data Trap in Elite Sport


“Every morning at 8:00 a.m., we do CMJs on the force plate.” “Why?” “Because… we always have.” Sound familiar?

In elite sport, testing is everywhere. Athletes fill out daily wellness questionnaires, step on force plates, run through jump and sprint diagnostics, and wear GPS devices that log every movement. The data flows endlessly into dashboards and athlete management systems. But here’s the uncomfortable truth: most of what we call “testing” is just measuring.

Testing vs. Measuring: A Critical Distinction

Let’s get specific. Measuring is collecting data. Testing is using that data to answer a question or influence a decision. For example:

📉 Measurement: Logging daily countermovement jump height via force plate.
📈 Testing: Using that jump data to modify gym load or adjust training when signs of fatigue appear.

The difference isn’t semantic; it’s strategic. Without a clear purpose or decision attached, measurement becomes:

Surveillance (best case)

White noise (likely case)

Distrust-building busywork (worst case)

🔁 The Feedback Loop: Is Your Testing Working?

Real testing requires more than data collection. It requires decision-making and evaluation. Ask yourself:

Did the data lead to a change in programming?

Was that change beneficial?

Are you reviewing those decisions regularly?

If not, you’re not testing; you’re just recording.

🧱 Common Pitfalls in Elite Sport Testing


No clear question: Testing just because it’s available, without clarity on what you’re trying to learn.

No actionable thresholds: Collecting metrics without defining red flags or cut-off points for intervention.

Poor reliability: Using tools or protocols that aren’t reliable, valid, or sensitive enough for high-performance contexts.

No feedback loop: Making a decision but never checking if the result justified the action.

🧪 Real-World Example: Testing vs Measuring in Practice


A pro football team runs daily CMJs on a force plate but never alters training based on the results. Players hit the same gym program regardless of what the data shows. One season, they pause the jump testing for six weeks due to equipment failure. No difference is observed in performance, fatigue management, or injury risk. That wasn’t testing. It was just measuring.

Now contrast this with a team that uses individualised jump height thresholds. When a player dips >10% below baseline for 3 consecutive days, sprint exposure is modified. Over a season, they observe fewer soft-tissue injuries and better late-season sprint metrics. That is testing.
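
The second team’s decision rule is simple enough to write down. Here is a minimal sketch in Python; the function name, units, and sample values are illustrative, while the 10% / 3-day thresholds come from the example above:

```python
def should_modify_sprint_exposure(jump_heights_cm, baseline_cm,
                                  drop_pct=0.10, consecutive_days=3):
    """Return True when the most recent `consecutive_days` jump heights
    all fall more than `drop_pct` below the athlete's individual baseline."""
    if len(jump_heights_cm) < consecutive_days:
        return False  # not enough data yet to trigger the rule
    threshold = baseline_cm * (1 - drop_pct)
    return all(h < threshold for h in jump_heights_cm[-consecutive_days:])

# Baseline of 40 cm -> trigger threshold of 36 cm.
recent = [40.1, 39.5, 35.2, 35.0, 34.8]  # three straight days below 36 cm
print(should_modify_sprint_exposure(recent, baseline_cm=40.0))  # True
```

The point is not the code itself but that the rule is explicit: anyone on staff can see exactly what the data must show before training changes.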

🧭 What Should We Be Asking Before We Test?


✅ What are we trying to detect?
(e.g. fatigue, readiness, asymmetry, risk)

✅ What will we do with the information?
(e.g. change training, adjust warm-up, modify match-day role)

✅ Will the athlete or coach change behavior based on the result?

✅ Is the test sensitive and reliable enough to detect meaningful change?

If you can’t answer those, the test doesn’t belong in your workflow.

✅ Best Practice: Making Testing Matter

  1. Align with Performance Goals
    Make testing relevant to KPIs:

Force-velocity profiling in sprinters

Repeated sprint ability in football

Asymmetry detection in return-to-play scenarios

  2. Reduce Frequency, Increase Intent
    More data ≠ better decisions.
    Depth > repetition.
  3. Test in Context
    A reactive jump under fatigue may be more useful than a perfect lab test.
  4. Communicate Results Clearly
    Testing data must be understood by coaches and athletes.
    If the info doesn’t travel, it doesn’t matter.

The Feedback Loop – A 4-Stage Process: Measure → Detect change → Modify program → Evaluate outcome → (loops back to Measure)
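
The four stages above can also be sketched as a simple record of one testing cycle. This is an illustrative structure, not part of any real athlete management system; the `was_testing` check encodes the article’s core distinction, since data that never modified a program, or a modification that was never reviewed, is measuring rather than testing:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestingCycle:
    """One pass through Measure -> Detect change -> Modify program -> Evaluate outcome."""
    measurements: List[float] = field(default_factory=list)  # Measure
    change_detected: bool = False                            # Detect change
    program_modified: bool = False                           # Modify program
    outcome_beneficial: Optional[bool] = None                # Evaluate outcome (None = never reviewed)

    def was_testing(self) -> bool:
        # Testing requires the data to have driven a decision AND
        # that decision to have been evaluated afterwards.
        return self.program_modified and self.outcome_beneficial is not None

full_loop = TestingCycle(measurements=[40.1, 39.5, 35.2],
                         change_detected=True, program_modified=True,
                         outcome_beneficial=True)
print(full_loop.was_testing())  # True

logging_only = TestingCycle(measurements=[40.1, 39.5, 35.2])
print(logging_only.was_testing())  # False: just recording
```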

🔚 Final Thought


Testing should be strategic, not habitual. It should inform, not just record. As performance professionals, our job is not to gather the most data. It’s to ask the right questions, and use the fewest metrics that give the clearest answers.

