As you may have inferred from my previous post on interactive, weighted passing networks, I'm a big fan of going beyond static visualizations.
Quantitative analysis is great, but I believe that we can amplify the gains of such analysis by being more adventurous with presentation.
This post in particular contains a *lot* of interactivity – much of it experimental – in an effort to probe new areas of the design space and hopefully spark productive discussions.

To motivate the rest of this post, consider Arsenal’s opening goal in a recent 3-1 win against Burnley:

After some intricate passing on the right, Mesut Özil slices the Burnley defence open with a ball through to Sead Kolašinac, whose timely cutback finds Pierre-Emerick Aubameyang for the finish. Thanks to @lastrowview, here's a neat top-down visualization of the same sequence of play:

> Incredible vision and execution from Ozil finding Kolasinac's run, who assisted Aubameyang for the first against Burnley #Arsenal pic.twitter.com/lV2tWF91hR
>
> — Last Row (@lastrowview) December 24, 2018

Of course, on paper the assist for this goal is credited to Kolašinac. But as an analyst you might (rightly) ask where Özil's credit is. Where's the metric that can capture both Özil's and Kolašinac's contributions in a proportional manner?

**How should we divide up the credit between Özil and Kolašinac for creating this opportunity? Drag the slider to enter what you think!**

There are several existing quantitative frameworks you might want to use to approach this problem:

- You can look at **assists**, but then contributions such as Özil's will go unnoticed in the numbers.
- You can look at **xGChain**, where the xG of the final shot (= 0.13 in this case) will be equally divided amongst every player involved in the play. Kolašinac, Özil, and even Aubameyang, Maitland-Niles, and Lacazette would all be credited with the same amount of xGChain here, which is not reflective of true contribution. A related quantity, **xGBuildup**, will divide up the xG equally amongst everyone who was involved *before* the assist (i.e. Özil, Maitland-Niles, and Lacazette), but this too suffers from the same problem.
- You can look at the **difference in xG** induced by each action in the buildup. This is better, but a threatening pass is not always one that goes to a good shooting position. For example, Özil's pass split the defence open, yet it wasn't received in a particularly good shooting position by Kolašinac. Rather, what makes Özil's pass special is that it puts Kolašinac in a position from where he can in turn easily create a good chance.

Building off the deficiencies of existing approaches, we would like a framework that can:

- **Reward individual player actions** (passes, dribbles) in buildup play.
- Operate on **event-level data**, due to availability constraints.
- Reward actions **independent of the end outcome of the possession** (i.e. Özil's reward shouldn't depend on Aubameyang shooting or scoring).
- Reward moving the ball not just into high-xG shooting positions, but also into **'threatening' positions** that can in turn lead to high-xG shooting positions with high likelihood.

There is of course no single solution that is 'correct' here. As always, there's a trade-off between modelling complexity and accuracy. The purpose of this post, though, is to introduce one possible modelling approach, and walk through how it can be implemented and used to analyze buildup play.

Let's go through those requirements again, this time proposing and refining a solution as we go along:

- **Reward individual player actions:** our model should assign a score to each player action (pass or dribble) based on how much it contributed to the buildup play.
- **Event-level data:** we do not have access to any player tracking data; we only have a list of sequential events along with basic attributes for each event, such as the player in possession, time elapsed in the match, start location, end location, etc.
- **Independence from end outcome:** each action should be assigned a score in isolation, disregarding what happened before and after it in the possession. As far as relevant input signals go, this effectively leaves us with just the start and end locations of the action. How can we assign a score based on just those? We can build off the 'difference in xG' approach and assign a value to every location on the pitch. Then, if a certain action resulted in the ball moving from A to B, the score for the action can simply be the value at B minus the value at A.
- **Recognize 'threatening' positions:** while assigning a value to every location on the pitch, we must look beyond xG. The value generated by xG assumes that we will shoot in the next action. Yet there are many locations from where scoring directly is hard, but it is easy to move the ball into other higher-xG areas. While assigning values to locations, we need to recognize these high-threat locations. In other words, xG allows us only one action (i.e. shoot) from the current position, while to value threat we must consider the possibility of stringing together multiple actions.

Having made these modelling assumptions, our problem is now more digestible: **given a repository of event-level data, can we assign a threat value to every location on the pitch?**

One simplified way of viewing buildup play is as follows: when a team has possession in a certain position, they can either shoot (and score with some probability), or move the ball to a different location via a pass or a dribble. This continues until the team either loses possession, or scores a goal.

If we run with this simplified model of buildup play, what does the data look like? From each position, how often do players shoot (and how often do they score)? How often do they move the ball, and where do they move it to? The following visualization aggregates data over a whole season (2017-18) of Premier League games; go ahead and explore how players behave by clicking on different zones!

After playing around with this view of the data, you should begin to see that every zone location \((x, y)\) has certain attributes:

- **Move probability \(m_{x,y}\):** when a player has possession in zone \((x, y)\), how often do they opt to move (i.e. pass or dribble) the ball as their next action?
- **Shoot probability \(s_{x,y}\):** when a player has possession in zone \((x, y)\), how often do they opt to shoot as their next action? In our simplified universe, players can only either move or shoot, so by definition \(m_{x,y} + s_{x,y} = 100\%\).
- **Move transition matrix \(T_{x,y}\):** in the cases where the player moves from zone \((x, y)\), what is the probability that they move to each of the other zones? The visualization above shows these probabilities in shades of green.
- **Goal probability \(g_{x,y}\):** in the cases where the player shoots from zone \((x, y)\), what is the probability that the shot turns into a goal? Note that this quantity is essentially a very simple implementation of xG!
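To make these quantities concrete, here's a minimal sketch of how they might be estimated from raw events. The event dictionary fields (`type`, `x`, `y`, `end_x`, `end_y`, `outcome`) and the 105×68 m pitch dimensions are my own assumptions for illustration, not any particular data provider's schema:

```python
import numpy as np

# Pitch divided into a 16 x 12 grid of zones, as in the post.
W, H = 16, 12

def zone(x, y, pitch_len=105.0, pitch_wid=68.0):
    """Map pitch coordinates (metres) to a zone index."""
    zx = min(int(x / pitch_len * W), W - 1)
    zy = min(int(y / pitch_wid * H), H - 1)
    return zx, zy

def estimate_counts(events):
    """Estimate s, m, g and the move transition matrix T from event data."""
    shots = np.zeros((W, H))
    goals = np.zeros((W, H))
    moves = np.zeros((W, H))
    trans = np.zeros((W, H, W, H))  # counts of moves (x,y) -> (z,w)
    for e in events:
        zx, zy = zone(e["x"], e["y"])
        if e["type"] == "shot":
            shots[zx, zy] += 1
            goals[zx, zy] += e["outcome"] == "goal"
        elif e["type"] in ("pass", "dribble"):
            moves[zx, zy] += 1
            tx, ty = zone(e["end_x"], e["end_y"])
            trans[zx, zy, tx, ty] += 1
    total = shots + moves
    # Empty zones stay at 0 rather than dividing by zero.
    s = np.divide(shots, total, out=np.zeros_like(shots), where=total > 0)
    m = np.divide(moves, total, out=np.zeros_like(moves), where=total > 0)
    g = np.divide(goals, shots, out=np.zeros_like(goals), where=shots > 0)
    T = np.divide(trans, moves[..., None, None],
                  out=np.zeros_like(trans), where=moves[..., None, None] > 0)
    return s, m, g, T
```

By construction, \(s_{x,y} + m_{x,y} = 1\) in every zone that saw at least one action, matching the simplified shoot-or-move universe.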

Now that we have some notation, let's recap what we're trying to do here. The problem with purely shot-based models like xG when it comes to analyzing buildup play is that many meaningful actions don't result in good shooting positions immediately, but rather lead to good shooting positions multiple actions later. This idea is put forth very eloquently by Cervone et al. in the context of basketball analytics (although this quote is surprisingly transferable to football).

"Despite many recent innovations, most advanced metrics remain based on simple tallies relating to the
terminal states of possessions like points, rebounds, and turnovers. While these have shed light on the game, they
are **akin to analyzing a chess match based only on the move that resulted in checkmate, leaving unexplored the
possibility that the key move occurred several turns before**. This leaves a major gap to be filled, as an understanding
of how players contribute to the whole possession – not just the events that end it – can be critical in evaluating
players, assessing the quality of their decision-making, and predicting the success of particular in-game tactics."

So how can we look beyond checkmate given the data that we have? How can we assign values to zones that reflect not just their immediate shooting value, but the future rewards they can bring (through movements of the ball to other zones)? The key intuition here is that when you have possession in zone \((x, y)\), you have a choice: you can either shoot and score with some probability, or you can move the ball to a different location. Given this background, we can formulate the problem as follows.

Let \(V_{x,y}\) be the 'value' that our algorithm assigns to zone \((x, y)\).

Now imagine you have the ball at your feet in zone \((x, y)\). You have two choices: shoot, or move the ball.

Based on past data, we know that whenever you shoot from here, you will score with probability \(g_{x, y}\). Thus, if you shoot, your expected payoff is \(g_{x,y}\).

Or, you can opt to move the ball via a pass to a teammate or by dribbling it yourself. But there's another choice to make here: which of the 192 zones should you move it to? Say you choose to move the ball to some new zone, \((z, w)\). In this case, your expected payoff is the value at zone \((z, w)\), i.e. \(V_{z, w}\). But this was just one of the 192 choices that you had; how can we compute the expected payoff for all of the 192 choices in totality? Here's where the move transition matrix \(T_{x,y}\) comes in: based on past data, we know where you're likely to move the ball to whenever you're in zone \((x, y)\), so we can proportionally weight the payoffs from each of the 192 zones. Specifically, for each zone \((z, w)\), the payoff is \(T_{(x,y)\rightarrow(z,w)} \times V_{z,w}\), i.e. the probability of moving to that zone times the reward from that zone. To get the total expected payoff for moving the ball, we must sum this quantity over all possible zones: $$\sum_{z=1}^{16} \sum_{w=1}^{12} T_{(x,y)\rightarrow(z,w)} \times V_{z,w} $$

Finally, let's piece it all together. We computed the payoff if you shoot as \(g_{x, y}\), and the payoff if you move the ball as \(\sum_{z=1}^{16} \sum_{w=1}^{12} T_{(x,y)\rightarrow(z,w)} \times V_{z,w}\).
Based on past data, we know that you tend to shoot \(s_{x,y}\) percent of the time, and you opt to move the ball \(m_{x,y}\) percent of the time.
Therefore, let's weight these two outcomes based on the probability of each of them happening, to obtain our final value for zone \(x, y\):
$$V_{x,y} = (s_{x,y} \times g_{x,y}) + (m_{x,y} \times \sum_{z=1}^{16} \sum_{w=1}^{12} T_{(x,y)\rightarrow(z,w)} V_{z,w})$$
This quantity looks beyond the checkmate; it values locations based on not just the immediate shooting threat, but the potential to induce danger later in the possession sequence.
It is inherently designed to capture a notion of 'threat', so **'Expected Threat' (xT)** seems like an apt name for it.
Putting it all together with the updated variable name, we get the following equation:
$$\boxed{\texttt{xT}_{x,y} = (s_{x,y} \times g_{x,y}) + (m_{x,y} \times \sum_{z=1}^{16} \sum_{w=1}^{12} T_{(x,y)\rightarrow(z,w)} \texttt{xT}_{z,w})}$$

Unfortunately, that formula on its own is incomplete. If you look at it carefully, you'll see that
computing the xT value for some zone \((x, y)\) requires that we *already* know the xT value for all the other zones. But every other zone suffers from the
exact same problem, so this forms a cyclic dependency that we can't resolve directly!

Fortunately, in practice there is a neat workaround.
All we need to do is start off with \(\texttt{xT}_{x,y} = 0\) for all zones \((x, y)\), and evaluate this formula not once, but **iteratively until convergence**.
During each iteration, we evaluate the new xT for each zone by using xT values from the **previous iteration**.
Empirically, I found 4-5 iterations to be sufficient for reasonable convergence, though this may vary based on your dataset.
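This iterative evaluation can be sketched in a few lines of numpy. The function and array names here are my own: `s`, `m`, `g` are per-zone shoot/move/goal probability grids and `T` is the move transition matrix, as defined earlier:

```python
import numpy as np

def compute_xt(s, m, g, T, n_iter=5):
    """Iteratively evaluate the xT equation, starting from xT = 0 everywhere.

    s, m, g: (16, 12) arrays of shoot/move/goal probabilities per zone.
    T: (16, 12, 16, 12) array; T[x, y, z, w] = P(move from (x,y) ends in (z,w)).
    """
    xt = np.zeros_like(g)
    for _ in range(n_iter):
        # Expected payoff of moving: transition-weighted sum of last
        # iteration's xT values over all destination zones.
        move_payoff = np.einsum("xyzw,zw->xy", T, xt)
        # Weight the shoot and move payoffs by how often each is chosen.
        xt = s * g + m * move_payoff
    return xt
```

After one iteration this reduces to \(s_{x,y} \times g_{x,y}\) (an xG-like surface); each further iteration lets value propagate one more move back from goal.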

Besides breaking the cyclic dependency and leading to convergence, this process comes with another added benefit: interpretability. Let's take a step back and think about what happens at iteration 1. At this point, we are using our initialization of xT = 0 for all zones. Here's what happens to our xT formulation:

$$\texttt{xT}_{x,y} = (s_{x,y} \times g_{x,y}) + (m_{x,y} \times \sum_{z=1}^{16} \sum_{w=1}^{12} T_{(x,y)\rightarrow(z,w)} \texttt{xT}_{z,w})$$
$$\texttt{xT}_{x,y} = (s_{x,y} \times g_{x,y}) + (m_{x,y} \times \sum_{z=1}^{16} \sum_{w=1}^{12} T_{(x,y)\rightarrow(z,w)} \cdot 0)$$
$$\texttt{xT}_{x,y} = s_{x,y} \times g_{x,y}$$

While not exactly xG, you can think of this as a value that represents how good a shooting position \((x, y)\) is. In other words, after iteration 1, we essentially have an xG model! An alternative way to think about this is that at iteration 1, we are only allowing the checkmate: we are valuing positions as though shooting were the only option, and passing and dribbling did not exist.

Now, in the second iteration, the new xT computation will use the xT values that were computed in iteration 1. At this point, the 'move' term in the formula will no longer be 0. This effectively means that we are now considering the possibility of "move, then shoot" in addition to just "shoot". We are now looking one move before the checkmate.

The same logic can be extended for multiple steps; for example, in the third iteration, we are additionally considering the possibility of "move, move, shoot" and looking up to two moves before the checkmate. This idea is powerful because it lends a very interpretable meaning to xT. Rather than being a score on an arbitrary scale, it has a very natural meaning (just like its distant cousin, xG). Specifically, \(\texttt{xT}_{x,y}\) at iteration \(n\) represents the probability of scoring within the next \(n\) actions.

Now that we have a way to find xT across the pitch, what does the end result look like? The visualization below shows a 2D as well as a 3D representation of the value surface generated by xT, using events across all the matches of the 2017/18 Premier League season. Use the slider to view the xT at different iterations within the algorithm, and hover/click to change zones on the pitch!

As you step through the successive iterations, it's worth noticing some interesting things:

- At iteration 0, the map is flat since we initialize xT = 0 for all zones to begin with.
- At iteration 1, we have effectively computed an xG model.
- At each subsequent iteration, you can see the xT spread to areas further away from the goal (because, as explained above, each iteration essentially allows us to account for one more action in the buildup play).
- The xT values begin to converge (to a reasonable degree) after 4-5 iterations.

Zooming out a bit, the point of xT was to come up with a metric that can quantify threat at any location on the pitch.
Now that we have xT, we can value individual player actions in buildup play by computing the difference in xT between the start and end locations.
In other words, we will say that an action that moves the ball from location \((x, y)\) to location \((z,w)\) has value \(\texttt{xT}_{z,w} - \texttt{xT}_{x,y}\).
Once again, there is a nice interpretable meaning to this: the value of an action is the change it causes in the team's probability of scoring within the next 5 actions (note that here we're using the xT computed after 5 iterations, hence 'next *5* actions').

Now, let's try answering the Kolašinac-Özil credit assignment problem from before using the xT framework:

- Özil's pass takes the ball from xT = 0.077 to xT = 0.158. **Difference in xT due to Özil = 0.081**.
- Kolašinac's pass takes the ball from xT = 0.158 to xT = 0.171. **Difference in xT due to Kolašinac = 0.013**.
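Given a computed xT grid (here `xt`, any 2D array-like indexed by zone; the helper is a sketch of my own), this credit assignment is just a difference of two lookups:

```python
def action_value(xt, start_zone, end_zone):
    """xT credit for a move action: value at its end zone minus value at its start zone."""
    (x, y), (z, w) = start_zone, end_zone
    return xt[z][w] - xt[x][y]
```

Negative values are possible too: an action that moves the ball into a less threatening zone (e.g. a back-pass to the goalkeeper) is debited rather than credited.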

As a sanity check, let's also look at the top xT creators during the 2017/18 Premier League season.
The table below shows the top 15 players in the league whose actions created the **highest cumulative change in xT**.
Note that this is not normalized by the number of actions taken – it is based on the raw sum of xT created.
This is intentional, because it surfaces players who not only know how to create danger, but those who do it consistently at a high volume.
The inclusion of Holebas at #3 might surprise you, but the left-back has established himself as Watford's most consistent and most dangerous creator.

Rank | Player | Team | xT Created
---|---|---|---
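Producing a ranking like this from per-action xT values is a straightforward aggregation. Here's a minimal sketch; the `actions` records with `player` and `xt_value` fields are hypothetical names of my own:

```python
from collections import defaultdict

def top_creators(actions, n=15):
    """Rank players by cumulative xT created (raw sum, not per-action average)."""
    totals = defaultdict(float)
    for a in actions:  # each action: {"player": ..., "xt_value": ...}
        totals[a["player"]] += a["xt_value"]
    # Sort by total xT created, descending, and keep the top n.
    return sorted(totals.items(), key=lambda kv: -kv[1])[:n]
```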

Besides simple credit assignment in buildup play, the xT framework opens the door for a host of other applications.
For example, so far I've only shown xT results using an entire season's worth of Premier League data: but of course, this means we lose team-specific information.
There's no doubt that teams behave differently in possession, prioritizing different areas of the pitch and exploiting different paths to goal based on their strengths (and weaknesses).
What happens if, instead of clubbing all the Premier League teams into one analysis, we **compute xT on a per-team basis**?

Sure enough, we do see a lot of variance across different teams. In addition to changes in the shape of the xT curve, note the differences in height. For instance, the shapes of Manchester City's and Spurs' curves are similar (which means they value the ball in similar areas of the pitch), yet their xT magnitudes are very different. This tells us that given the ball in the same position, City are much more threatening than Spurs (due to their higher conversion rate of possessions into goals).

While these per-team xT maps are interesting to look at, they're not very actionable on their own.
That being said, the underlying data is powerful because it can give us a team-specific view into how danger is created through buildup play.
For example, one useful question to answer during pre-match analysis might be: **where on the pitch do our opponents tend to create the most danger from?**

To answer this, we can use our opponent's xT map to value all of their actions from past matches, and aggregate these values based on the start location of the action.
In other words, for each grid location, we can look at the actions that *originated* there, and sum the xT created by these actions.
This will give us a per-location cumulative value that will highlight the amount of danger created from different areas of the pitch.
Additionally, by highlighting the common end zones of actions starting in a particular zone, we can start to see our opponent's most dangerous passages of play.
To make this even more useful for tactical preparation, we might want to also know *who* are the players that are responsible for creating threat through these passages.
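The aggregation described above can be sketched as follows. The per-action fields `start_zx`, `start_zy`, and `xt_value` are hypothetical names for each action's start zone and its xT value:

```python
import numpy as np

def danger_by_start_zone(actions, W=16, H=12):
    """Sum the xT created by actions, bucketed by the zone they started from."""
    grid = np.zeros((W, H))
    for a in actions:
        # Credit the action's xT value to its zone of origin.
        grid[a["start_zx"], a["start_zy"]] += a["xt_value"]
    return grid
```

The resulting grid is exactly what the green map below visualizes: hotspots mark the areas of the pitch from which an opponent generates the most cumulative threat.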

The visualization below attempts to answer precisely these questions. The green map shows zones from where maximum xT is created. Hovering/clicking on a zone will show you the dangerous passages of play that originate there, as well as the players most responsible for them. Use the dropdown menu to switch to any other Premier League team!

These were just a couple of applications of xT, and there are many more left to explore.
The ability of xT to capture the on-the-ball behaviour of teams leads to several promising directions.
Looking at how xT changes during the course of a possession sequence may help us, for example, in **identifying and analyzing patterns of play** such as counter-attacks.
At the player level, we can assess an individual **player's decision-making** relative to how his team tends to play: "Is this player making high-reward passing choices given his team's xT profile? Would he be better off shooting rather than dribbling in certain areas?"
Perhaps even more interesting is answering similar questions in the context of **player scouting**: "Can we tell if this player, who has never played for us, will fit into our system? Does he have a history of creating actions that will lead to high xT gains for us?"

If there are any directions that you're particularly excited about or want to explore together, please let me know! I'll continue to explore the limits of xT and will publish relevant results on my Twitter and on this blog.

I'd love to hear any thoughts or feedback you might have; please reach out to me on Twitter at @karun1710 or via email at karun.singh17@gmail.com!