One of the main benefits of the Catapult Vision platform is the ability to integrate video and wearable data, providing crucial layers of context to analysis.
In this article we’ll look at examples that demonstrate the power of that integration and the extra levels of detail it can bring to our analysis.
Example 1: Goal Conceded
In this example, the timeline shows that one of our defenders' periods of highest physical intensity was immediately followed by a goal being conceded (the blue tag on the timeline). When we watch the footage back, it's clear that he overcommitted to a tackle, leaving a gap in the defence and allowing the opposition to score.
With video alone we would have analysed the clip and discussed the player's positional play and decision-making in that situation. However, we would have missed the specific physiological context of the event.
When we combine the video with GPS data, we can see that the defender (Player 2) had just endured his hardest two-minute period of the game and was making decisions under high levels of physical stress. This might then provoke different discussions around his conditioning, or his awareness of how to handle himself under that level of stress.
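The underlying analysis here is a sliding-window calculation over the wearable data. A minimal sketch of the idea, using synthetic per-second speed samples rather than the Catapult API (all function names and data below are illustrative assumptions, not part of the platform):

```python
# Sketch: find a player's hardest two-minute period from per-second GPS
# speed samples, then check whether a tagged event (e.g. the goal
# conceded) falls just after it. Data and names are illustrative.

def hardest_window(speeds, window=120):
    """Return (start_second, distance_m) of the sliding window covering
    the most distance. speeds: per-second speed samples in m/s."""
    best_start = 0
    best_dist = current = sum(speeds[:window])
    for start in range(1, len(speeds) - window + 1):
        # Slide the window one second: drop the oldest sample, add the newest.
        current += speeds[start + window - 1] - speeds[start - 1]
        if current > best_dist:
            best_start, best_dist = start, current
    return best_start, best_dist

# Synthetic trace: mostly jogging (2 m/s) with a high-intensity burst
# (7 m/s) between seconds 600 and 720.
speeds = [2.0] * 1200
for t in range(600, 720):
    speeds[t] = 7.0

start, dist = hardest_window(speeds)
event_time = 730  # second at which the goal was conceded (the blue tag)

# The peak window covers the burst, and the event lands within a
# minute of that window ending.
print(start, dist, 0 <= event_time - (start + 120) <= 60)
```

In practice the platform surfaces this alignment visually on the timeline; the sketch just shows why a peak-intensity window sitting immediately before a tagged event is easy to detect programmatically.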
Example 2: Red Card
Later in the game, one of our midfielders (Player 1) experienced his two periods of greatest physical intensity less than five minutes apart. A few minutes later he was sent off for blatantly pulling back an opposition attacker to prevent a 1v1 situation. Suffering from high levels of fatigue after a physically intense few minutes, the player decided he was unable to keep pace with the attacker and so committed the foul.
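The related check here is whether a player's two most intense windows cluster close together in time, which can flag a fatigue risk. A hedged sketch of that check, again on synthetic per-second speed data (names and numbers are illustrative, not the Catapult API):

```python
# Sketch: do a player's two most intense two-minute windows fall within
# five minutes of each other? Synthetic data, illustrative names only.

def top_two_windows(speeds, window=120):
    """Return start seconds of the two highest-distance, non-overlapping
    sliding windows in a per-second speed trace."""
    totals = sorted(
        ((sum(speeds[s:s + window]), s)
         for s in range(len(speeds) - window + 1)),
        reverse=True,
    )
    first = totals[0][1]
    # Next-best window that does not overlap the first.
    second = next(s for _, s in totals if abs(s - first) >= window)
    return first, second

# Synthetic trace with two bursts starting four minutes apart.
speeds = [2.0] * 1800
for t in range(300, 420):   # first burst
    speeds[t] = 7.0
for t in range(540, 660):   # second burst, four minutes later
    speeds[t] = 6.5

a, b = top_two_windows(speeds)
gap = abs(b - a)
print(a, b, gap < 300)  # True: peak periods under five minutes apart
```

The brute-force window sums are fine for a single match trace; a production pipeline would use a rolling-sum structure as in the earlier sketch.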
Video analysis in isolation allows us to review footage and clips that others deem important, but it can lead to false conclusions based on an incomplete set of facts.
By integrating physical data from wearable devices, we are able to better understand the specific context of each event and open up better discussions with athletes and coaches around performance.