A Trained Monkey Could Do Better

The first time I appeared on live financial television was August 15, 2007. It was a guest appearance on CNBC’s Squawk Box in the early stages of the 2007-2008 financial crisis.

Of course, none of us knew at that time exactly how and when things would play out, but it was clear to me that a meltdown was coming, the same meltdown I had been warning the government and academics about since 2003.

I’ve done 1,000 live TV interviews since then, but that first one remains memorable. Carl Quintanilla conducted the interview with some participation from Becky Quick, both of whom could not have been more welcoming.

They and the studio crew made me feel right at home even though it was my first time in the studio and my first time meeting them. Joe Kernen remained off-camera during my interview, his back turned, reading the New York Post sports page, but that’s Joe. We had plenty of interaction in my many interviews over the years that followed.

When I was done, I was curious about how many guests CNBC interviewed over the course of a day. Being on live TV made me feel a bit special, but I wanted to know how special it was to be a guest. The answer was deflating and brought me right down to earth.

CNBC hosts about 120 guests in a single day, day after day, year after year. Many of those guests are repeat performers, just as I became a repeat guest on CNBC during the course of the crisis. But, I was just one face in the midst of a thundering herd.

What were all of those guests doing with all of that airtime? Well, for the most part they were forecasting. They predicted stock prices, interest rates, economic growth, unemployment, commodity prices, exchange rates, you name it.

Financial TV is one big prediction engine and the audience seems to have an insatiable appetite for it. That’s natural. Humans and markets dislike uncertainty, and anyone who can shed some light on the future is bound to find an audience.

Which raises a question: How accurate are those predictions?

No one expects perfection or anything close to it. A forecaster who turns out to be accurate 70% of the time is way ahead of the crowd. In fact, if you can be accurate just 55% of the time, you’re in a position to make money, since you’ll be right more often than you’re wrong. If you size your bets properly and cut losses, a 55% batting average will produce above-average returns.

Even monkeys can join in the game. If you’re forecasting random binary outcomes (stocks up or down, rates high or low, etc.), a trained monkey will have a 50% batting average. The reason is that the monkey knows nothing and just points to a random result.

Random pointing with random outcomes over a sustained period will be “right” half the time and “wrong” half the time, for a 50% forecasting record. You won’t make any money with that, but you won’t lose any either. It’s a push.
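To make the arithmetic concrete, here’s a minimal Python sketch of the batting-average logic. It assumes even-money payoffs and a fixed stake per forecast; the function name and figures are illustrative, not a trading model.

```python
import random

def simulate_forecaster(accuracy, n_bets=10_000, stake=1.0, seed=7):
    """Total P&L from n_bets even-money forecasts with a fixed stake per bet."""
    rng = random.Random(seed)
    pnl = 0.0
    for _ in range(n_bets):
        # A correct call wins the stake; a wrong call loses it.
        pnl += stake if rng.random() < accuracy else -stake
    return pnl

for accuracy in (0.50, 0.55, 0.70):
    print(f"{accuracy:.0%} accuracy -> P&L over 10,000 calls: "
          f"{simulate_forecaster(accuracy):+,.0f} units")
```

Run it and the 50% line hovers near zero (the push), 55% grinds out roughly +1,000 units, and 70% roughly +4,000. The edge does the work, not genius.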

So, if 70% accuracy is uncanny, 55% accuracy is OK, and 50% accuracy is achieved by trained monkeys, how do actual professional forecasters do? The answer is less than 50%.

In short, professional forecasters are worse than trained monkeys at predicting markets.

Need proof? Every year, the Federal Reserve forecasts economic growth on a one-year forward basis. And it’s been wrong every year for the better part of a decade. When I say “wrong,” I mean wrong by a wide margin, not by a rounding error.

If the Fed forecast 3.5% growth and actual growth was 3.3%, I would consider that to be awesome.

But, the Fed would forecast 3.5% growth and it would come in at 2.2%. That’s not even close, considering that growth is confined to plus or minus 4% in the vast majority of years.
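Run the numbers and the scale of the miss is obvious. A quick sketch, using only the illustrative figures above:

```python
forecast, actual = 3.5, 2.2      # Fed forecast vs. actual growth, in percent
plausible_span = 8.0             # growth "confined to plus or minus 4%"

miss = forecast - actual         # 1.3 percentage points
print(f"miss: {miss:.1f} pp = {miss / actual:.0%} of the actual outcome, "
      f"{miss / plausible_span:.0%} of the entire plausible range")
# miss: 1.3 pp = 59% of the actual outcome, 16% of the entire plausible range
```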

If you have defective and obsolete models, you will produce incorrect analysis and bad policy every time. There’s no better example of this than the Federal Reserve.

The Fed uses equilibrium models to understand an economy that is not an equilibrium system; it’s a complex dynamic system. The Fed uses the Phillips curve to understand the relationship between unemployment and inflation when 50 years of data say there is no fixed relationship.

The Fed uses what’s called value-at-risk modeling, based on normally distributed events, when the evidence is clear that the distribution of risk events follows a power law, not a normal or bell curve.
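The gap between those two assumptions is not academic. Here’s a small sketch comparing tail probabilities under a bell curve and under a power law; the tail exponent alpha = 3 is an illustrative choice, not the Fed’s calibration or mine:

```python
import math

def normal_tail(k):
    """P(X > k sigma) under a standard normal (bell curve)."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def power_law_tail(k, alpha=3.0):
    """P(X > k) under a Pareto-style power-law tail starting at 1."""
    return k ** (-alpha)

for k in (2, 5, 10):
    print(f"{k:>2}-sigma event: normal {normal_tail(k):.1e} vs. "
          f"power law {power_law_tail(k):.1e}")
```

Under the bell curve, a 10-sigma event has probability of roughly 8e-24, a never-in-the-history-of-the-universe number; under this power law, it’s about one in a thousand. A value-at-risk model built on the first assumption is blind to exactly the events that matter.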

As a result of these defective models, the Fed printed $3.5 trillion of new money beginning in 2008 to “stimulate” the economy, only to produce the weakest recovery in history. Now the cycle of monetary tightening has been under way in various forms for nearly six years.

Let’s not be too hard on the Fed. The IMF’s forecasts were just as bad. And the “wisdom of crowds” can be dramatically wrong too: it does not have much predictive value and is just as faulty as the professional forecasts from the Fed and the IMF.

There are reasons for this. The wisdom of crowds is a highly misunderstood concept. It works well when the problem is simple and the answer is static, but unknown.

The classic case is guessing how many jellybeans are in a large jar. In that situation, the average of 1,000 guesses actually will be better than a single “expert” opinion. That works because the number of jellybeans never changes. There’s nothing dynamic about the problem.
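A quick simulation shows why. The bean count and noise level below are made up; what matters is that every guesser’s error is independent of the others:

```python
import random

rng = random.Random(0)
true_count = 4_500                 # beans actually in the jar (illustrative)

# Each guess is noisy but unbiased, and the errors are independent.
guesses = [true_count + rng.gauss(0, 900) for _ in range(1_000)]
crowd_average = sum(guesses) / len(guesses)

print(f"crowd average: {crowd_average:,.0f} vs. true count: {true_count:,}")
# Independent errors cancel: the average lands within a few dozen beans,
# while any single guess can miss by hundreds.
```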

But, when the problem is complex and dynamic, as capital markets forecasting is, and the answer is truly unknown, the wisdom of crowds is subject to all of the same biases, herding, risk aversion, and other human quirks documented by behavioral psychology.
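One tweak to the same simulation shows what herding does. Give every forecaster the same systematic lean, the shared story everyone anchors on, and averaging no longer rescues the crowd; the numbers are again illustrative:

```python
import random

rng = random.Random(1)
true_value = 100.0                 # the unknown outcome, in arbitrary units
shared_bias = 25.0                 # herding: the whole crowd leans the same way

# Same independent noise as before, plus a bias common to every forecaster.
forecasts = [true_value + shared_bias + rng.gauss(0, 10) for _ in range(1_000)]
crowd_average = sum(forecasts) / len(forecasts)

print(f"crowd average: {crowd_average:.1f} vs. true value: {true_value:.0f}")
# Averaging removes the independent noise but not the shared bias:
# the crowd converges confidently on roughly 125, not 100.
```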

This is important because when academics say, “you can’t beat the market,” my answer is that market indicators are usually wrong. When talking heads say, “you can’t beat the wisdom of crowds,” I just smile and explain what the wisdom of crowds actually does and does not mean.

By the way, this is one reason why markets missed Brexit and Trump. The professional forecasters simply misinterpreted what polls and betting odds were actually saying.

None of this means that polls, betting odds, and futures contracts have no value. They do. But, the value lies in understanding what they’re actually indicating and not resting on a naive and superficial understanding of the wisdom of crowds.

Does this mean that forecasting is impossible or that the experts are uninformed? Not at all. Highly accurate forecasting is possible.

The problem with the “experts” is not that they’re dopes (they’re not), or that they’re not trying hard (they are). The problem is that they use the wrong models. The smartest person in the world, working as hard as possible, will always be wrong if the model is wrong.

That’s why the IMF, the Fed, and the wisdom of crowds bat below .500. They’re using the wrong models.

But here at Project Prophesy, I can confidently say I’ve got the right models, which I developed for the CIA, working in collaboration with top applied mathematicians and physicists at places like the Los Alamos National Laboratory and the Applied Physics Laboratory.

It’s these models that let me accurately forecast events like Brexit and the election of Donald Trump, while all the mainstream analysts laughed in my face. It’s not that I’m any smarter than many of these people. It’s just that I use superior models that work in the real world, not in never-never land.

These models do not assume equilibrium systems and normally distributed risk, as mainstream models do. My models are based on complexity theory, Bayesian statistics, behavioral psychology, and history. They produce much more accurate results than any of the alternatives.
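The full models are beyond the scope of this piece, but the Bayesian building block is easy to illustrate. A minimal sketch with made-up probabilities, not my actual indicators: start with a prior on some event, then update it as each new signal arrives instead of waiting for the polls to flip:

```python
def bayes_update(prior, p_signal_if_true, p_signal_if_false):
    """Posterior P(event | signal) via Bayes' theorem."""
    hit = prior * p_signal_if_true
    return hit / (hit + (1 - prior) * p_signal_if_false)

# Illustrative numbers only: a 30% prior on an event, then three successive
# signals, each twice as likely to appear if the event really is coming.
p = 0.30
for signal in range(1, 4):
    p = bayes_update(p, p_signal_if_true=0.6, p_signal_if_false=0.3)
    print(f"after signal {signal}: probability = {p:.0%}")
# 46% -> 63% -> 77%: each signal moves the estimate a little at a time,
# which is how an outlier call like Brexit can build before the crowd sees it.
```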

This is the methodology behind my forecasts, which allows my readers access to actionable market recommendations they won’t find elsewhere.

Regards,

Jim Rickards

for The Daily Reckoning
