A Decision-Making Framework

The hardest part of using an algorithm to guide decision-making is actually using the model when it’s time to decide! One of the most common failure modes I’ve personally fallen victim to when making picks is what’s called confirmation bias. Our brains are predisposed to seek confirming evidence because that expends the least amount of energy; it’s the path of least resistance. Next time someone tells you “yes”, compare the subtle feeling you get against how you feel when someone tells you “no”. “Yes” is calming, lowering your defenses. There’s a reason business negotiators want to “get to yes”. Hearing “no”, on the other hand, triggers a fight-or-flight stress response. You telling someone “no” is a power move.

Overcoming confirmation bias, like any bias, is difficult and it takes a concerted effort. Bookmakers know this and exploit it. That’s why taking the road less traveled, going against the grain, offers the greatest reward. Using a decision-making framework (i.e., a process) will help you stay on the “straight and narrow”.

To win you must:

  1. Outperform.
  2. Differentiate.

Using the model to make a decision is not as easy as it seems. Ideally, you’d always get a clear buy/sell signal. However, most often the signals conflict with one another, so you have to decide which signals carry more weight, which compounds the uncertainty. To illustrate my point, let me explain the quandary I often encounter and the process I use.

First, the easy part: I update my algorithm, which returns an implied point spread. Then I compare my number against the consensus estimate, a calculated average of other models’ numbers and the Vegas market line. Like any human subject to decision-making bias, I get a warm-and-fuzzy feeling inside when they agree. When they don’t, I react with furrowed eyebrows and instinctively find my confidence in the model’s predictive ability shaken.
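
To make that step concrete, here is a minimal sketch in Python. The equal-weight average, the variable names, and the three-point disagreement threshold are illustrative assumptions, not a description of how the consensus is actually weighted.

```python
def consensus_spread(model_spreads, vegas_line):
    """Consensus estimate: an equal-weight average of other models' numbers
    and the Vegas market line (the weighting here is an assumption)."""
    numbers = list(model_spreads) + [vegas_line]
    return sum(numbers) / len(numbers)


def compare_to_consensus(my_spread, model_spreads, vegas_line, threshold=3.0):
    """Flag games where my number and the consensus disagree by more than
    `threshold` points (the 3-point default is arbitrary, for illustration)."""
    consensus = consensus_spread(model_spreads, vegas_line)
    gap = my_spread - consensus
    return {"consensus": consensus, "gap": gap, "disagree": abs(gap) > threshold}


# Hypothetical inputs, quoted from the favorite's perspective (negative =
# favored): my number says KC -1.3 while two other models and the Vegas
# line sit around KC -6.
print(compare_to_consensus(-1.3, [-6.5, -5.5], -6.0))
# -> consensus -6.0, gap of roughly 4.7 points, disagree True
```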

Take the following two scenarios from Week 10 of the 2019 NFL season.

  • Game 1: My algorithm favored the Kansas City Chiefs by 1.3 (win probability = 54%) on the road, at the Tennessee Titans. The consensus estimate showed KC -6 (67%). The SuperContest line we were playing against offered KC -3.5 (62%). Decent algorithm value on TEN +3.5.

  • Game 2: The algorithm showed the San Francisco 49ers by 1.5 (55%) at home against the Seattle Seahawks, but the consensus suggested SF -8.5 (76%). The SuperContest had SF -6.5 (~70%). Heavy value on SEA +6.5; in fact, it was the #1-rated raw algorithm play that week (see the win-probability sketch below).
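
A quick aside on the percentages in parentheses: a common way to translate a point spread into an approximate win probability is to treat the final margin as roughly normal around the spread. The sketch below uses that approximation with an assumed standard deviation of about 13.5 points; it is not necessarily the conversion my algorithm uses, but it lands near the numbers quoted above.

```python
from math import erf, sqrt

NFL_MARGIN_SD = 13.5  # assumed standard deviation of NFL margins; tune to your own data


def win_prob_from_spread(points_favored, sd=NFL_MARGIN_SD):
    """Approximate probability that a team favored by `points_favored`
    wins outright, treating the final margin as normal around the spread."""
    z = points_favored / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))


print(round(win_prob_from_spread(1.3), 2))  # ~0.54, close to the KC number above
print(round(win_prob_from_spread(6.0), 2))  # ~0.67, close to the consensus number
```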

What sides would you have picked if given these two games?

Allow me to walk through my thought process. With the first game, KC-TEN, my algorithm suggested value on TEN, but the consensus disagreed; it strongly preferred KC. Even though my model accounted for the return of the reigning MVP, Patrick Mahomes, everyone seemed to be so all over KC that it scared us off, and we didn’t officially play the KC-TEN game (though we strongly considered playing TEN). Turns out TEN not only covered but won outright!

In the second game, once again my algorithm preferred the underdog, SEA, while the consensus estimate thought SF would win handily. Using the Pythagorean win expectation calculation, I thought SEA was overdue for a regression to the mean, so I discounted the algorithm’s suggestion. We picked SF and paid the price as SEA won 27-24 in OT.
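
For anyone unfamiliar with it, the Pythagorean win expectation estimates a team’s “true” winning percentage from points scored and allowed. A minimal sketch follows; the 2.37 exponent is a value commonly used for the NFL and is an assumption here, as are the example inputs.

```python
def pythagorean_expectation(points_for, points_against, exponent=2.37):
    """Expected winning percentage from points scored and allowed.
    The 2.37 exponent is a value commonly used for the NFL (an assumption;
    Bill James' original baseball formula used 2)."""
    pf = points_for ** exponent
    pa = points_against ** exponent
    return pf / (pf + pa)


# Hypothetical example: a team scoring 300 and allowing 250 points projects
# to roughly a .606 winning percentage. A team whose actual record runs well
# ahead of this number may be due for regression toward the mean.
print(round(pythagorean_expectation(300, 250), 3))  # ~0.606
```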

Every week I encounter situations like these. Sometimes the signals all point in the same direction; other times it’s like a sailboat on a windless lake. There are many strategies you could use to navigate uncertain situations. The strategy I suggest is to build a rules-based decision-making framework to rationalize and prioritize the algorithm’s selections. (Ray Dalio would agree.)
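
To make “rules-based framework” concrete, here is a bare-bones sketch of the general shape such a thing could take: each rule examines a candidate pick and either passes it through or vetoes it. The Pick fields, the minimum-edge rule, and its two-point threshold are illustrative assumptions, not my actual rule set.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Pick:
    team: str
    is_road_team: bool
    algo_edge: float  # my spread minus the contest line, in points


def rule_minimum_edge(pick: Pick, min_edge: float = 2.0) -> Optional[Pick]:
    """Veto picks whose edge against the contest line is too small
    (the 2-point threshold is arbitrary, for illustration)."""
    return pick if abs(pick.algo_edge) >= min_edge else None


def apply_rules(picks: List[Pick],
                rules: List[Callable[[Pick], Optional[Pick]]]) -> List[Pick]:
    """Run every candidate pick through every rule in order; a rule
    returning None takes that pick off the card."""
    survivors = []
    for pick in picks:
        for rule in rules:
            pick = rule(pick)
            if pick is None:
                break
        if pick is not None:
            survivors.append(pick)
    return survivors
```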

Very early on in my algorithm days, for confidence-point-based leagues, I set a rule prohibiting me from assigning the highest tier of confidence points to teams on the road, regardless of their pre-game win probabilities. So even if it was the 11-2 New England Patriots against the hapless 0-13 Cincinnati Bengals, I would assign only the second-highest tier of points to NE, for example. This proved rather simple and effective, until the marginal statistical benefit I got from that rule was overcome by the competition in subsequent seasons.
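
In code, that early rule is about as simple as it sounds. The sketch below assumes a 16-game slate where the top confidence tier is worth 16 points; the specific numbers are illustrative.

```python
def cap_road_team_confidence(is_road_team: bool,
                             requested_points: int,
                             top_tier: int = 16) -> int:
    """Never give a road team the top confidence tier; knock it down one tier.
    (The 16-point top tier assumes a 16-game slate and is illustrative.)"""
    if is_road_team and requested_points == top_tier:
        return top_tier - 1
    return requested_points


# Even an 11-2 road favorite gets, at most, the second-highest tier.
print(cap_road_team_confidence(is_road_team=True, requested_points=16))  # 15
```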

As you would expect, my rules-based framework has grown in complexity. Underperforming rules have been jettisoned for better ones. One rule I still (try to) follow, which I learned from Colin Cowherd, is to “take the obvious game off the board”. Following this rule is what dissuaded us from taking a side on the KC-TEN game outlined above. Remember, SuperContest lines are static: even as market sentiment shifts, the contest lines don’t move. As it turned out, a strong majority of our competitors selected KC, reasoning that with KC’s QB, Patrick Mahomes, returning from the injury list, -3.5 was easy money. They were wrong! While we didn’t get the added satisfaction of winning at the expense of everyone else, their loss still had a second-order benefit to our place in the standings. By following one of our rules, going against the grain, and avoiding this game, we differentiated our picks from the competition.
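
Expressed in the same spirit as the framework sketched earlier, “take the obvious game off the board” might look something like the check below. How you estimate the field’s lean and where you set the threshold are assumptions; the point is only to show the shape of the rule.

```python
def is_obvious_game(estimated_field_share_on_one_side: float,
                    threshold: float = 0.75) -> bool:
    """True if the field is expected to pile onto one side of a static
    contest line, in which case we pass on the game entirely.
    (Both the field estimate and the 75% threshold are assumptions.)"""
    return estimated_field_share_on_one_side >= threshold
```

In practice you could fold the field estimate into the Pick record and run this as one more veto rule in the pipeline above.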

Building a rules-based framework on top of an algorithm foundation is a critical step to improve decision-making performance.