Catapult Fundamentals: Evaluating the quality of performance data

Data in sport is inherently noisy. As technology evolves and more data is generated, it is important to quantify the boundaries of that noise (variability). Once those boundaries are defined, we can have greater confidence in the judgement calls we make when observations fall outside them.
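As a simple illustration of this idea, here is a minimal sketch in Python (with hypothetical distance values, not a Catapult API) that defines a noise band from recent comparable sessions and flags any new observation falling outside it:

```python
import statistics

# Define a "noise band" from recent comparable sessions and flag new
# observations that fall outside it. The width multiplier k is purely
# illustrative; an appropriate value depends on the metric and context.
def noise_band(baseline, k=1.5):
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)  # sample standard deviation
    return mean - k * sd, mean + k * sd

def outside_band(value, baseline, k=1.5):
    lower, upper = noise_band(baseline, k)
    return value < lower or value > upper

# Hypothetical example: total distance (m) from five recent, comparable sessions
recent = [5400, 5620, 5510, 5480, 5570]
today = 6300
if outside_band(today, recent):
    print("Today's value lies outside the expected noise band - review it.")
```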

Fundamentally, the confidence we can have in systems and data is determined by their reliability and validity. This article explores how that confidence might be established in an applied sporting environment.

Reliability

Reliability refers to the extent to which a tool or technique produces consistent results. In essence, it deals with the repeatability of findings. For example, if a particular study were conducted several times, would it yield the same results each time? If it did, we could say the data, or the instrument that generated it, was reliable.
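To make repeatability concrete, here is a minimal sketch (in Python, with illustrative numbers) of one common way to quantify test-retest reliability: the typical error of measurement, calculated from two repeated trials, together with its coefficient of variation.

```python
import math

# A minimal sketch (illustrative numbers, not real data): the "typical error"
# of measurement from two repeated trials, calculated as the standard
# deviation of the difference scores divided by sqrt(2), plus the same error
# expressed as a percentage of the grand mean (coefficient of variation, CV%).
def typical_error(trial1, trial2):
    diffs = [b - a for a, b in zip(trial1, trial2)]
    n = len(diffs)
    mean_diff = sum(diffs) / n
    sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
    return sd_diff / math.sqrt(2)

def cv_percent(trial1, trial2):
    grand_mean = (sum(trial1) + sum(trial2)) / (len(trial1) + len(trial2))
    return 100 * typical_error(trial1, trial2) / grand_mean

# Hypothetical data: total distance (m) for five athletes across two
# identical training sessions
trial1 = [5120, 4980, 5340, 5210, 5055]
trial2 = [5180, 4940, 5290, 5260, 5010]
print(f"Typical error: {typical_error(trial1, trial2):.1f} m")
print(f"CV: {cv_percent(trial1, trial2):.2f}%")
```

A smaller typical error (or CV%) means the metric is more repeatable, and therefore smaller real changes in that metric can be detected with confidence.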

In the specific case of GPS technologies, we know that linear measures of low-velocity locomotion are more reliable than multi-directional measures of high-velocity locomotion. When working with athlete monitoring systems, it is crucial to establish the reliability of the technology, and of each metric it generates, before making any decisions based on the data derived from it.

Validity

Validity relates to the extent to which a device measures what it claims to measure. To go into slightly more detail, there are two fundamental aspects to validity:

Internal Validity: Do technologies and processes accurately measure what they were intended to measure?

External Validity: Can information gathered from one context/scenario be generalised to apply to other scenarios/athletes?

For your data to be valid, it must first be reliable. In other words, if a technology cannot produce consistent, repeatable results, it cannot be accurately measuring what it purports to measure. Note that the reverse does not hold: a device can be reliable yet invalid, consistently reporting the same inaccurate value.

Assessing Reliability/Validity of Athlete Monitoring Systems

As the use of athlete tracking technologies has become increasingly widespread, the academic community has focused considerable attention on scrutinising and quantifying the reliability and validity of the data these systems generate.

The reliability and validity of data can be situation and environment dependent. As such, practitioners are advised to conduct in-house tests in their own environment (e.g. standardised runs) to quantify the confidence they can have in the data being generated. These tests are unlikely to be as rigorous as those conducted by academic institutions, but they can give you a useful perspective on your systems and can inform some of the processes you put in place.
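As an illustration, here is a minimal sketch (in Python, with hypothetical numbers, not a Catapult tool) of how repeated standardised runs over a known distance could be summarised: the mean bias against the known distance speaks to validity, while the trial-to-trial coefficient of variation speaks to reliability.

```python
import statistics

# Hypothetical in-house test: eight repeats of a standardised run over a
# known 400 m course, recording the distance reported by the device each time.
KNOWN_DISTANCE_M = 400.0
measured = [398.2, 401.5, 399.8, 402.3, 397.6, 400.9, 399.1, 401.0]

mean_measured = statistics.mean(measured)
sd_measured = statistics.stdev(measured)

bias = mean_measured - KNOWN_DISTANCE_M      # systematic error -> validity
cv_pct = 100 * sd_measured / mean_measured   # trial-to-trial spread -> reliability

print(f"Mean measured distance: {mean_measured:.1f} m")
print(f"Bias vs. known distance: {bias:+.1f} m")
print(f"Coefficient of variation: {cv_pct:.2f}%")
```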

Interested in finding out how Catapult can help your team find its competitive edge? Click here to learn more about our range of athlete monitoring technologies.
