American Football

Breece Hall, Analytics, and the Eye Test

Photo: NFL: New York Jets at New England Patriots | Brian Fluharty-USA TODAY Sports

What do analytics tell us about Breece Hall’s performance?

Evaluating football players used to be easy. You watched a guy play and you said, “Wow, he’s good” or “Oh, he’s bad.” Usually, whoever you said it to would give their opinion back. Sometimes that opinion would differ from yours, but, for the most part, at least for the truly great and the truly bad players, that opinion was shared, and that was how people knew a player was generally good or bad.

Nowadays though? If you think a guy is good, then you had better be ready to defend why with more than “well, he looks good to me.” Why? Because there’s a decent chance that whoever you say it to is going to find a statistic to fact check that claim, and I don’t think that’s a bad thing in and of itself.

However, I do think that has a lot to do with why some resist the use of analytics. You know what your eyes are telling you. If that statistic doesn’t align with that view, then you are going to think that statistic can’t be all that accurate. For example, if I told you that I had a statistic that showed New York Jets cornerback Sauce Gardner was among the worst cornerbacks in the sport, then you should immediately question whether my statistic has any grounding in reality. If that happened enough times then surely it would be reasonable to question if analytics on the whole are actually useful at all.

That isn’t a problem with analytics though. It’s just the use of a bad statistic, which is a different problem entirely. When we think about analytics, we should try to think about what it is intended to do when done correctly, which is to quantify what we’re seeing. Using another example, if I think a potential draft pick has a great vertical leap on tape then I can go look at his combine jump and see as much. If I do, then awesome, my eye test is confirmed, and I can trust what I saw on tape. If I look at his combine jump and it’s awful, then maybe that’s a sign that I need to go back and double check my eyes because something is off.

Regarding the Jets this season, one player I saw analytics used both for and against is running back Breece Hall. Breece also makes a really fun case study here, because any New York Jets fan who watched this season regularly saw him break off long, highlight-reel runs with the eye test alone.

Breece is good. We can all watch the tape and see that. We don’t really need to argue that unless we’re just looking to argue for the sake of arguing.

However, if we checked some statistics, we wouldn’t necessarily see support that he’s good. For example, consider two metrics: Expected Points Added (EPA) per rush and rushing yards over expected (RYOE). On the plots referenced here, EPA per rush is shown on the X axis, where being further right reflects more expected points added.

On EPA per rush, Hall is pretty middle of the pack. He looks just about smack dab average, actually. But while EPA is a good statistic for measuring a rushing game, is it a good measure of a running back’s effectiveness more narrowly? I would argue not. EPA per rush is really a team statistic of how effective a play was for the team on the whole. A good running back who receives poor run blocking but fights back to the line of scrimmage could easily score low on this, because he’s making the most of a really awful situation but still ending up with a pretty bad outcome in the grand scheme.
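To make that concrete, here is a minimal sketch of what EPA measures on a single play. The expected-points values below are made up for illustration; in practice they come from an expected-points model fit to play-by-play data:

```python
# Hypothetical sketch: EPA for one play is the change in the expected-points
# model's value from before the snap to after the play. The numbers here are
# illustrative, not real model output or real Jets data.

def epa(ep_before: float, ep_after: float) -> float:
    """Expected Points Added for a single play."""
    return ep_after - ep_before

# A back who turns 3rd-and-12 into 4th-and-2 may have run very well behind
# bad blocking, but the team's expected points can still drop on the play,
# so his EPA comes out negative despite a solid individual effort.
print(epa(ep_before=0.8, ep_after=0.3))  # negative, even after a good run
```

This is the sense in which EPA per rush grades the situation the whole offense produced, not the back’s individual contribution to it.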

I think a lot of Jets fans would describe Hall’s season as “he was good, but the run blocking was bad.” So maybe another statistic could capture that context. In this case, rushing yards over expected is built to do just that, since it provides an estimate of how many yards above the average running back a given player gained on a given play. Calculate that for every carry and you’d have a pretty good idea of which running backs got the most out of their plays on average.
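The calculation described above can be sketched in a few lines. The carries and expected-yards figures here are invented for illustration; a real RYOE model estimates expected yards from blocking, box counts, and field position:

```python
# Hypothetical sketch of RYOE per rush: for each carry, subtract the model's
# expected yards from the actual yards gained, then average over all carries.
# All numbers below are made up for illustration, not real data.

def ryoe_per_rush(carries: list[tuple[float, float]]) -> float:
    """carries: (actual_yards, expected_yards) pairs for one running back."""
    return sum(actual - expected for actual, expected in carries) / len(carries)

# Three illustrative carries: beating expectation twice, falling short once.
example_carries = [(12.0, 5.0), (3.0, 4.5), (8.0, 4.0)]
print(ryoe_per_rush(example_carries))  # average yards gained over expectation
```

Because the expectation is set per play, a back running behind bad blocking isn’t punished for the situation the way he is by EPA per rush.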

In support of that eye test, and of the idea that a correctly chosen statistic should match it, Hall scores very well on RYOE per rush, which is depicted on the graph such that a higher score is better. By this measure, Hall ranks top 10, with scores comparable to well-regarded running backs such as James Conner and Kyren Williams, rather than sitting middle of the pack alongside lesser players such as Najee Harris and Rhamondre Stevenson.

So, there you have it. Statistics can be good, but only when they are thoughtfully chosen. In this case, a well-aligned measure like RYOE per rush seems to tell an accurate story of Hall’s effectiveness, a story we would have completely missed had we instead relied on a poorly aligned statistic like EPA per rush. This is something to keep in mind as analytics become more and more integrated into the NFL and its analysis.
