The first time I appeared on live television to discuss finance was on 15 August 2007. It was a guest appearance on CNBC’s Squawk Box program at the early stages of the 2007–08 financial crisis.
Of course, none of us knew at that time exactly how and when things would play out, but it was clear to me that a meltdown was coming — the same meltdown I had been warning the government and academics about since 2003.
I’ve done 1,000 live TV interviews since then, but that first one remains memorable.
When I was done, I was curious about how many guests CNBC interviewed over the course of a day.
Being on live TV made me feel a bit special, but I wanted to know how special it was to be a guest. The answer was deflating and brought me right down to earth.
CNBC has about 120 guests on in a single day, day after day, year after year. Many of those guests are repeat performers, just as I became a repeat guest on CNBC during the course of the crisis. But I was just one face in the midst of a thundering herd.
What were all of those guests doing with all that airtime? Well, for the most part they were forecasting. They predicted stock prices, interest rates, economic growth, unemployment, commodity prices, exchange rates, you name it.
Financial TV is one big prediction engine, and the audience seems to have an insatiable appetite for it. That’s natural. Humans and markets dislike uncertainty, and anyone who can shed some light on the future is bound to find an audience.
Which raises the question: How accurate are those predictions?
No one expects perfection or anything close to it.
A forecaster who turns out to be accurate 70% of the time is way ahead of the crowd.
In fact, if you can be accurate just 55% of the time, you’re in a position to make money since you’ll be right more often than not. If you size your bets properly and cut losses, a 55% batting average will produce above-average returns.
Even monkeys can join in the game. If you’re forecasting random binary outcomes (stocks up or down, rates high or low, etc.), a trained monkey will have a 50% batting average. The reason is that the monkey knows nothing and just points to a random result.
Random pointing with random outcomes over a sustained period will be ‘right’ half the time and ‘wrong’ half the time, amounting to a 50% forecasting record. You won’t make any money with that, but you won’t lose any either. It’s a push.
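The arithmetic behind both claims is easy to check. The sketch below (a toy simulation, not market data) shows that random guesses on random binary outcomes bat about 50%, while a forecaster who is right 55% of the time and bets one unit at even odds earns roughly +0.10 per bet:

```python
import random

random.seed(42)

N = 100_000

# Random binary outcomes: market up (1) or down (0), 50/50.
outcomes = [random.randint(0, 1) for _ in range(N)]

# The 'trained monkey': a random call each time, independent of the outcome.
monkey_calls = [random.randint(0, 1) for _ in range(N)]

hits = sum(c == o for c, o in zip(monkey_calls, outcomes))
print(f"Monkey batting average: {hits / N:.3f}")  # ~0.500, a push

# A 55% forecaster betting one unit at even odds: +1 per win, -1 per loss.
skilled_hits = sum(random.random() < 0.55 for _ in range(N))
pnl = skilled_hits - (N - skilled_hits)
print(f"55% forecaster P&L per bet: {pnl / N:+.3f}")  # ~ +0.10
```

The edge compounds: right 55% of the time means the wins outnumber the losses by about ten bets in every hundred, which is all an even-odds bettor needs.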
So, if 70% accuracy is uncanny, 55% accuracy is OK, and 50% accuracy is achieved by trained monkeys, how do actual professional forecasters do? The answer is less than 50%.
In short, professional forecasters are worse than trained monkeys at predicting markets.
Need proof? Every year, the Federal Reserve forecasts economic growth on a one-year forward basis. In 2010, they forecast 2011; in 2011, they forecast 2012; and so on. From 2009 to 2016, the Fed was wrong eight years in a row. When I say ‘wrong’, I mean by a wide margin.
A track record of getting it wrong
If the Fed forecast 3.5% growth and actual growth was 3.3%, I would consider that to be awesome. But the Fed would forecast 3.5% growth and it would come in at 2.2%. That’s not even close considering that growth is confined to plus or minus 4% in the vast majority of years. And let’s not be too hard on the Fed. The IMF forecasts were just as bad.
For further evidence, have a look at Chart 1 below. It shows the implied path of Fed interest rate hikes from 2008 to 2021 based on Fed Funds futures contracts traded on the Chicago Mercantile Exchange.
This forecast is not from a specific institution. Instead, it represents the ‘wisdom of crowds’ or the distilled views of all market participants as aggregated by market prices.
The red line shows the actual path of interest rates over time. The black dotted lines show the expected path of interest rates based on Fed Funds futures contracts traded on the CME at various points in time.
As you can see, from 2009 to 2015, the market consistently expected higher rates than the Fed delivered. Those are the black dotted lines above the red line.
From 2016–18, the market consistently expected lower rates than the Fed delivered. Those are the black dotted lines below the red line.
Right now, the market seems to have it about right (the black dotted lines starting in 2018 and predicting higher rates), but we’ll see what happens.
My expectation is that the Fed is overtightening and will have to back off from rate hikes later this year. That means the red line will trend below the black dotted lines and the market will miss the mark again.
The Fed Funds futures contract is one of the most liquid and heavily traded contracts in the world. If any futures contract reflects ‘the wisdom of crowds,’ this is it.
What the results show is that ‘the wisdom of crowds’ does not have very high predictive value. It’s just as faulty as the professional forecasts from the Fed and IMF.
There are reasons for this. The wisdom of crowds is a highly misunderstood concept. It works well when the problem is simple and the answer is static but unknown.
The classic case is guessing how many jellybeans are in a large jar. In that situation, the average of 1,000 guesses actually will be better than a single ‘expert’ opinion. That works because the number of jellybeans never changes. There’s nothing dynamic about the problem.
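The jellybean case is easy to demonstrate. In the toy simulation below (the jar size and the noise level are made-up illustrative numbers, not data from any real experiment), each of 1,000 guessers makes an unbiased but noisy estimate; the crowd's average lands far closer to the truth than a typical individual guess does:

```python
import random
import statistics

random.seed(7)

TRUE_COUNT = 4_000  # jellybeans actually in the jar (illustrative)

# Each guesser makes an independent, unbiased but noisy estimate.
guesses = [random.gauss(TRUE_COUNT, 1_000) for _ in range(1_000)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_COUNT)

# Average error of a single guesser, for comparison.
typical_error = statistics.mean(abs(g - TRUE_COUNT) for g in guesses)

print(f"Crowd error:   {crowd_error:8.1f}")
print(f"Typical error: {typical_error:8.1f}")
```

The averaging works precisely because the answer is fixed and the errors are independent, so they cancel. When the target itself moves, as market prices do, that cancellation is no longer guaranteed.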
Wrong models give wrong forecasts
When the answer is truly unknown and the problem is complex and dynamic, as in capital markets forecasting, the wisdom of crowds is subject to all of the same biases, herding, risk aversion and other human quirks known from behavioural psychology.
This is important because when academics say ‘you can’t beat the market’, my answer is that the market indicators are usually wrong. When talking heads say, ‘You can’t beat the wisdom of crowds,’ I just smile and explain what the wisdom of crowds actually does and does not mean.
By the way, this is one reason why markets missed Brexit and Trump. The professional forecasters simply misinterpreted what polls and betting odds were actually saying.
None of this means that polls, betting odds and futures contracts have no value. They do. But the value lies in understanding what they’re actually indicating and not resting on a naive and superficial understanding of the wisdom of crowds.
Does this mean that forecasting is impossible or that the experts are uninformed? Not at all. Highly accurate forecasting is possible.
The problem with the ‘experts’ is not that they’re dopes (they’re not) or they’re not trying hard (they are). The problem is that they use the wrong models.
The smartest person in the world, working as hard as possible, will always be wrong if they use the wrong model. That’s why the IMF, the Fed, and the wisdom of crowds all bat below average.
They’re simply using the wrong models.