“Given the competitive streak that typified superforecasters, I was surprised to learn that forecasters working in teams beat the solo predictors by a long shot. But it was not only the working conditions that allowed predictors to thrive. More than anything else, it was the mindset. The ‘supers’ had a willingness to update their beliefs constantly as new data rolled in. That openness was the strongest ingredient in accurate predictions – which makes these superforecasters not like pundits at all.”
Susan Pinker, The Globe and Mail, July 16, 2016
“The book continues to turn up in all sorts of interesting places… Recently, Harvard historian and public intellectual Niall Ferguson cited Tetlock’s work to a Sydney Opera House audience, in response to a question from the audience about the accuracy of charismatic pundits in the media. Ferguson went on to say: ‘I have established a practice of assessing every prediction that I make, what I’ve learned; one must be extremely rigorous about what one got wrong … I’ve become much more formal about it’.”
Justin Burke, The Australian, July 9, 2016
“[I]n political forecasting, we need to be humble. For investors, that means being balanced and hedged, and not approaching an important, unpredictable event as though it is a certainty. It does not mean abandoning prediction markets. ‘We need to be patient,’ says Dr Tetlock, ‘and not toss out our best forecasting systems every time that happens’.”
John Authers, Financial Times, July 1, 2016
by Nick Rohrbaugh and Warren Hatch
The United Kingdom voted to Leave the European Union, sending shockwaves through political capitals and financial centers worldwide. Almost everyone expected a close vote, but few anticipated that Britain would vote to Leave.
Did the political and economic elites miscalculate the likelihood of a Leave vote? If so, they had good company. Most betting markets, as well as Good Judgment’s Superforecasters and participants on the GJ Open public forecasting site, closed with odds that favored a victory for the Remain camp.
Unless a forecaster assigns a 0% chance to an outcome that does in fact occur or 100% to an event that never happens, it’s impossible to judge the accuracy of a single forecast as being definitively “wrong.” How, then, can we evaluate whether the elites and other forecasters were wrong or just unlucky?
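One standard tool for judging probabilistic forecasts across many questions is the Brier score, the squared error between a probability forecast and the 0/1 outcome, averaged over many forecasts. A minimal sketch (the numbers below are hypothetical, not the actual closing odds):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.

    0.0 is a perfect score; 1.0 is the worst possible score.
    """
    return (forecast - outcome) ** 2

# A single "wrong" call can still come from a skilled forecaster;
# skill only shows up in the average score over many forecasts.
remain_forecast = 0.75   # hypothetical pre-referendum probability for Remain
remain_occurred = 0      # Remain lost
print(brier_score(remain_forecast, remain_occurred))  # 0.5625
```

On a single question, a 75% Remain forecast that misses scores worse than a coin flip (0.5625 vs. 0.25), which is why distinguishing “wrong” from “unlucky” requires scoring forecasters over a large set of questions, not one referendum.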
At the Washington Post‘s Monkey Cage blog, John Sides notes that despite Donald Trump becoming the presumptive GOP nominee, forecasters on Good Judgment Open have not changed their minds about the chances that Republicans will lose their majority in the Senate.
To be sure, there is some research suggesting that presidential candidates do have “coattails” in Senate races. The better a party’s presidential candidate does, the better that party’s Senate candidates do. We may therefore see the Democrats’ chances in the Senate increase if Trump’s chances in November decrease. But it hasn’t happened yet.
You can view the current consensus and sign up to make your own forecast on Good Judgment Open.
John Sides, June 15, 2016, Monkey Cage
“You should expect forecasters to do better to the degree they’re working in a world where they get quick, clear feedback on their forecasts. ‘Distinct possibility’ doesn’t count. You have to be making numerical probability estimates repeatedly over time on a wide range of outcomes. If you do that, you can learn to become one of the better-calibrated professionals.”
Jason Zweig, The Wall Street Journal, June 17, 2016
“A key point is what Superforecasters do, not what they are. ‘Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person,’ Tetlock concluded.”
fin24, Ian Mann, June 14, 2016
The Washington Post‘s Monkey Cage blog highlighted a trend on Good Judgment Open that shows the chances of a third party candidate winning at least 5% of the popular vote in the 2016 US presidential election are rising:
In the past three weeks, the estimated chance went up from close to 0 percent to 25 percent. This is to say, the forecasters see a stronger possibility that a third party or independent candidate could reach a relatively rare threshold in the popular vote. “Stronger possibility” does not mean “very likely,” of course, because the current odds are only 1-in-4. Nevertheless, the trend is noteworthy. It could reflect the ongoing debate within the Republican Party about Donald Trump, the fact that the Libertarian Party nominated two unusually qualified candidates or other factors.
Check out the current consensus and sign up to make your own forecast.
John Sides, June 14, 2016, Monkey Cage
“[T]he GJP employed psychologists to follow the supers’ progress to see what average folks like us could learn from them. They found that people who were good at forecasting were fairly intelligent, but not Mensa candidates. Essentially, they possessed a healthy amount of cynicism that led them to ask the right questions and weigh up data fairly, and an open-mindedness that allowed them to change their minds easily when the facts seemed to be contradicting their forecast.”
Melissa York, May 26, 2016, City A.M.
“Goldman Sachs said its “near-term Fed call” will now offer odds or probabilities… ‘This change will allow us to respond to new information about the economy or the Fed’s views in a manner that is nuanced but nevertheless clear, consistent with best practice as identified in the 2015 book Superforecasting by Philip Tetlock and Dan Gardner,’ the note said.”
Jeffrey Bartash, May 23, 2016, MarketWatch
“Tetlock describes the ways that superforecasters think, and it is reasonable to believe that few activists think the same way. Many activists are driven by a sense of outrage over injustice or by a feeling of duty to take a stand. Also, they need to believe their efforts will make a difference. These beliefs and emotions are not conducive to the calm, rational, probabilistic approach used by superforecasters.
“Nevertheless, it is possible to become better at forecasting. Few people have trained systematically at it. Now that the skills are better understood and there are ways of obtaining feedback, it should be possible for many more people to realistically aspire to become superforecasters. Some activists might want to do this themselves.”
Brian Martin, May 23, 2016, Waging Nonviolence
“The best forecasters are all curious, humble, self-critical, give weight to multiple perspectives and feel free to change their minds often… But as Tim Richards has argued, we are both by design and by culture inclined to be anything but humble in our approach to investing. We invest with a certainty that we’ve picked winners and sell in the certainty that we can reinvest our capital to make more money elsewhere. We are usually wrong, often spectacularly wrong. These tendencies come from hardwired biases and also from emotional responses to our circumstances. But they also arise out of cultural requirements to show ourselves to be confident and decisive. Even though we should, we rarely reward those who show caution and humility in the face of uncertainty.”
Bob Seawright, May 23, 2016, ThinkAdvisor
“In the age of information overload, the active investor’s edge increasingly lies in knowing what information truly matters and how to process that information. If you can identify skill — whether you are looking to hire a portfolio manager or you are a portfolio manager aspiring to improve — we believe that this superforecasting framework can give you a better shot at beating the market.”
Sammy Suzuki, May 9, 2016, Institutional Investor
“Ideally, a bet would use a question as big as the debate it means to settle. But that will not work, because big questions – “Will population growth outstrip resources and threaten civilization?” – do not produce easily measurable outcomes. The key, instead, is to ask many small, precise questions…. This approach, using question clusters, could be applied to virtually any important debate. Right now, for example, we are putting the hawks-versus-doves argument about the Iran nuclear deal to the forecasting test.
“Naturally, using many questions could result in split decisions. But if our goal is to learn, that is a feature, not a bug. A split decision would suggest that neither bettor’s understanding of reality is perfectly accurate and that the truth lies somewhere between. That would be an enlightening result particularly when public debates are dominated by extreme positions.”
Phil Tetlock and Dan Gardner, May 11, 2016, Project Syndicate
Enthusiasm for electric vehicles has historically followed a boom-and-bust cycle. With the emergence of electric vehicle startups like Tesla, the introduction of global electric models like the Nissan Leaf, declining battery costs, government subsidies, and public/private infrastructure investments, it is a good time to ask: Are we on the cusp of an electric vehicle “tipping point”?