Monkey Cage: Is Trump hurting Republicans down-ballot?


At the Washington Post’s Monkey Cage blog, John Sides reviews the Good Judgment Open consensus on several questions about the GOP’s chances of success in the House and Senate.

Yes, the odds are against the Republicans’ retaining control of the Senate, though they will likely continue to control the House. But it’s still not clear that Trump is (yet) hurting Republicans running for Congress.

The GJ Open consensus predicts that Republicans are likely to lose control of the Senate to Democrats, with only a 38% probability of maintaining control, but are much more likely to retain control of the House (85%). Moreover, these probabilities have not changed much over the last few months, despite rumblings that Trump’s increasingly unconventional candidacy might hurt Republicans in House and Senate races.

Of course, this does not mean that no Republican candidates are in trouble (and perhaps partly due to Trump):


Of the three Senate races currently being forecast, two Republican Senators — Pat Toomey of Pennsylvania and especially Ron Johnson of Wisconsin — are in trouble. In both cases, their estimated chances of winning have trended down at least a little bit. Gov. Pat McCrory of North Carolina has also seen his estimated chance of winning slip below 50 percent.

On the other hand, Sen. Rob Portman of Ohio appears to be in a better position now: His estimated chances of winning have increased over time. This mirrors the trend in his polls, where he currently leads by five points. Of course, the forecast — a 65 percent chance that he will win — suggests this five-point lead isn’t certain to last.

Agree or disagree? Join the Monkey Cage US Election Challenge to make your own forecast on these and many other questions.

John Sides, August 24, 2016, Monkey Cage

Win a signed copy of Superforecasting!


As we did in July, we’re giving away three copies of Superforecasting: The Art and Science of Prediction signed by Philip E. Tetlock to anyone who forecasts on Good Judgment Open in September.

To enter, simply join GJ Open and make a forecast in the month of September. In the first week of October, we’ll randomly choose three forecasters to receive a signed copy of the book and contact the winners by email.

Congratulations to BillionthChimpadi and ram1317, who each won a signed copy of the book for forecasting in July!

Results from the GJ Open Feedback Survey

On July 5th, we sent a short feedback survey to over 26,000 forecasters who had joined Good Judgment Open since the site opened to the public in September 2015, hoping to learn more about how our forecasters use GJ Open and what they hope to gain from participating in a massively open online forecasting site. Since then, we’ve heard from over 700 of you (or about 3% of all forecasters) about why you joined and what you’d like us to work on improving.

By and large, you’ve told us that you joined GJ Open to become a better forecaster, and that you want more feedback and tools to help you do so.

When asked “Why did you sign up for GJ Open?”, over 72% of respondents rated the answer “to find out how good I am at forecasting” as important or very important, and over 66% rated the answer “to get feedback and become a better forecaster” as important or very important. You also indicated that you joined to learn about new topics (over 57% rated important or very important) and to follow the news on current events (50%), but placed less emphasis on interacting with other forecasters (29%) and sharing your opinion on current events (25%).


These priorities were reflected in the features you asked for. When asked to rank the importance of four types of features we’re considering developing, you indicated that you would most like to see detailed feedback about your forecasting accuracy and to receive training on how to become a better forecaster, as over 60% of you ranked those two options above features that help you follow the news and learn about new topics and social features that let you interact with other forecasters.


The crowd on Good Judgment Open is incredibly diverse, as are your reasons for forecasting. But the results of this survey tell us that most of you see the site not just as a fun game or a chance to interact with other people, but specifically as an exercise in judgment and as a way to improve your skills so you can apply them to other areas of your life.

Given these results, our data science team will be working closely with Cultivate Labs, our partners in developing the current Good Judgment Open forecasting platform, to prioritize the development of new features that provide you with better and more detailed feedback on your forecasting accuracy and that train you to become a better forecaster based on the science of forecasting. And we have hundreds of thoughtful and detailed comments from the survey on specific improvements we can make to achieve those goals.

Of course, we recognize that not all of our forecasters joined GJ Open because they see it as an educational opportunity. We believe that forecasting can be fun and social (in fact, the ninth commandment of Superforecasting, “Strive to bring out the best in others and let others bring out the best in you,” explicitly encourages interaction as a path to improvement) – so we will continue to develop features that facilitate social interaction and to offer challenges that cover fun topics.

The feedback you’ve provided shows us that many of you care about GJ Open just as much as we do, and we thank you for putting in the time and effort to help us make it even better. For those of you who have yet to fill out the survey, we’re still monitoring the results, and you can complete it here.

– The Good Judgment Open Team

The Economist’s World in 2016 Challenge Wraps with Both Prescience and Surprises

Eleven months after the launch of Good Judgment Open, our first challenge has come to a close. The Economist’s World in 2016 Challenge officially closed on August 1, 2016, a few weeks after the nomination of Donald Trump as the presidential candidate for the Republican party.

In total, 6,485 forecasters joined the challenge to forecast twenty-five questions about the world in 2016. Along the way, GJO forecasters were surprised by many impactful events, such as the unexpected victory of the Leave campaign in Britain’s referendum on membership in the European Union, the nomination of Trump, and the election of Rodrigo Duterte as President of the Philippines. Forecasters proved their prescience on many other topics, accurately predicting the GDP growth of the BRIC countries, that Dilma Rousseff would survive impeachment until April 1, and that Germany would not announce limits on its acceptance of asylum seekers or refugees before July 1.

Congratulations to alistaircookie, who won the challenge by forecasting on 23 of the 25 possible questions, ending up near the top of many question leaderboards and proving the most accurate in forecasting Trump’s rise to the top of the Republican ticket. Alistaircookie finished with an average Brier score of 0.176 and a cumulative Accuracy Score of -2.074, beating the next-best forecaster’s Accuracy Score by a margin of 0.645. (You can review how we score the accuracy of forecasts here.)
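For readers curious what a Brier score measures, here is a minimal sketch of the standard multi-category Brier score, where 0 is perfect and 2 is maximally wrong. This is an illustration only, not GJ Open’s exact scoring implementation, and the example probabilities are made up:

```python
def brier_score(probabilities, outcome_index):
    """Brier score for one multi-category forecast: the sum of squared
    differences between the assigned probabilities and the outcome
    (1 for the answer that happened, 0 for every other answer).
    Ranges from 0 (perfect) to 2 (confidently wrong)."""
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(probabilities)
    )

def mean_brier(forecasts):
    """Average Brier score over (probabilities, outcome_index) pairs."""
    return sum(brier_score(p, o) for p, o in forecasts) / len(forecasts)

# A confident, correct binary forecast scores near 0...
print(round(brier_score([0.9, 0.1], 0), 2))  # 0.02
# ...while the same forecast, had the other outcome occurred, scores near 2.
print(round(brier_score([0.9, 0.1], 1), 2))  # 1.62
```

Averaging this score over many questions is what lets low scores separate skill from luck: one lucky long shot helps little if a forecaster is badly calibrated everywhere else.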

For those who are interested in continuing to forecast the geopolitical future of the world in 2016 and beyond, check out the GJP Classic Geopolitical Challenge and keep your eyes open for the launch of some new and exciting challenges in the coming months.

Reuters story endorses Superforecasting methods

Will OPEC agree to freeze output in September?

…“The most successful forecasters start by trying to define a base rate chance of something occurring and then adjust it up or down in the light of evidence about the specific circumstances in a particular case.

They begin with an ‘outside view’ and then proceed to adjust it with an ‘inside view’ based on the specifics of the case (Superforecasting: The Art and Science of Prediction, Tetlock and Gardner, 2015).”…

John Kemp, August 16, 2016, Reuters
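The base-rate-then-adjust process the quote describes can be sketched as a simple Bayesian update: the outside view supplies the prior, and inside-view evidence moves it via a likelihood ratio. This is purely an illustration — the base rate and likelihood ratio below are invented numbers, not a real OPEC forecast:

```python
def update(prior, likelihood_ratio):
    """Update a prior probability given a likelihood ratio
    P(evidence | event) / P(evidence | no event), working in odds."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Outside view (hypothetical): suppose freezes followed ~20% of
# comparable past meetings.
base_rate = 0.20
# Inside view (hypothetical): suppose current reporting is judged twice
# as likely if a freeze is coming than if it is not.
p = update(base_rate, 2.0)
print(round(p, 3))  # 0.333
```

Note the adjustment is deliberately modest: starting from the base rate anchors the forecast, and case-specific evidence shifts it rather than replacing it.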

Disruptions from Vehicle Innovations

Earlier this year, Good Judgment Open and the Mack Institute for Innovation Management in the Wharton School at the University of Pennsylvania launched the Electric Vehicle (EV) Challenge. Its goal: to investigate indicators of whether public interest in (and adoption of) electric vehicles would reach a “tipping point,” speeding the rate of diffusion beyond the slow, obstacle-filled pace of the past two decades.

But EVs are not the only disruptive technology affecting vehicles. As in many other industries, advanced automation promises to fundamentally alter the relationship between human beings and the machines they depend on. For automobiles, the technology for “self-driving” or “autonomous” vehicles has advanced rapidly, spurred not only by new features offered by the world’s largest automakers but also by innovations introduced by newer entrants like Google and Tesla.

However, just as the barriers to reaching an EV “tipping point” are not exclusively technical, autonomous vehicles face challenges of public acceptance and regulatory approval, particularly for the most advanced implementations with virtually no human control.

Because these two disruptive trends may well intersect more and more in the future, we decided to expand the EV Tipping Point challenge to include this larger set of “disruptive vehicle innovations.” In alignment with our expanded focus, we have renamed the challenge Disruptions from Vehicle Innovations.



Monkey Cage: The odds for third-party success this year are getting better and better


At the Washington Post’s Monkey Cage blog, John Sides remarks on our Good Judgment Open question about the popular vote share for an independent or third-party candidate in the upcoming US presidential election. According to the GJ Open consensus, the chances that a candidate like Gary Johnson will win at least 5% of the popular vote have continued to rise to over 50%:


There is now slightly better than a 50-50 chance that a third party could get at least 5 percent of the popular vote. That’s striking.

Check out the current consensus and sign up to make your own forecast.

John Sides, August 10, 2016, Monkey Cage

Crowd Forecasters Make Early Calls on Zika-related GJ Open Questions

Olympians are taking extra, sometimes creative, precautions against mosquito bites and Zika in Rio, but so far, concern over the illness has not overwhelmingly influenced the Games.

However, a few months ago, fears of the epidemic and of how it would influence travel for the Games created a tense, worrisome environment. It was then, in February 2016, that we asked whether “the Centers for Disease Control would elevate their travel guidance for Brazil to Warning Level 3 due to the Zika virus before the Olympics begin.” The day after we posted the question, the Washington Post published an article entitled “What’s really scary about the Zika virus are the things that we don’t know.”

Despite the edgy environment, from nearly the moment the question opened, the 454 Good Judgment Open forecasters who registered predictions on the website assigned a high probability to the travel guidance not being elevated to Warning Level 3.


Susan Pinker in The Globe and Mail on “In a world filled with surprises, can we predict the future?”

“Given the competitive streak that typified superforecasters, I was surprised to learn that forecasters working in teams beat the solo predictors by a long shot. But it was not only the working conditions that allowed predictors to thrive. More than anything else, it was the mindset. The ‘supers’ had a willingness to update their beliefs constantly as new data rolled in. That openness was the strongest ingredient in accurate predictions – which makes these superforecasters not like pundits at all.”

Susan Pinker, The Globe and Mail, July 16, 2016

Justin Burke in The Australian on “Superior forecasters draw their successes from ashes of failure”

“The book continues to turn up in all sorts of interesting places… Recently, Harvard historian and public intellectual Niall Ferguson cited Tetlock’s work to a Sydney Opera House audience, in response to a question from the audience about the accuracy of charismatic pundits in the media. Ferguson went on to say: ‘I have established a practice of assessing every prediction that I make, what I’ve learned; one must be extremely rigorous about what one got wrong … I’ve become much more formal about it’.”

Justin Burke, The Australian, July 9, 2016

John Authers in the FT on “Brexit shows no greater loser than political and market experts”

“[I]n political forecasting, we need to be humble. For investors, that means being balanced and hedged, and not approaching an important, unpredictable event as though it is a certainty. It does not mean abandoning prediction markets. ‘We need to be patient,’ says Dr Tetlock, ‘and not toss out our best forecasting systems every time that happens’.”

John Authers, Financial Times, July 1, 2016

Superforecasting Brexit:
Were Most Forecasters Wrong or Just Unlucky?

by Nick Rohrbaugh and Warren Hatch

The United Kingdom voted to Leave the European Union, sending shockwaves through world political capitals and financial centers. Most everyone expected a close vote, but few anticipated that Britain would vote to Leave.

Did the political and economic elites miscalculate the likelihood of a Leave vote? If so, they had good company. Most betting markets, as well as Good Judgment’s Superforecasters and participants on the GJ Open public forecasting site, closed with odds that favored a victory for the Remain camp.

Unless a forecaster assigns a 0% chance to an outcome that does in fact occur, or 100% to an event that never happens, it’s impossible to judge a single forecast as definitively “wrong.” How, then, can we evaluate whether the elites and other forecasters were wrong or just unlucky?

Monkey Cage: Is Trump threatening the GOP’s Senate majority? Not yet.

At the Washington Post’s Monkey Cage blog, John Sides highlights the fact that despite Donald Trump’s becoming the GOP’s presumptive nominee, forecasters on Good Judgment Open have not changed their minds about the chances that Republicans lose their majority in the Senate.

To be sure, there is some research suggesting that presidential candidates do have “coattails” in Senate races. The better a party’s presidential candidate does, the better that party’s Senate candidates do. We may therefore see the Democrats’ chances in the Senate increase if Trump’s chances in November decrease. But it hasn’t happened yet.

You can view the current consensus and sign up to make your own forecast on Good Judgment Open.

John Sides, June 15, 2016, Monkey Cage

Jason Zweig interviews Phil Tetlock on “The Perilous Task of Forecasting”

“You should expect forecasters to do better to the degree they’re working in a world where they get quick, clear feedback on their forecasts. ‘Distinct possibility’ doesn’t count. You have to be making numerical probability estimates repeatedly over time on a wide range of outcomes. If you do that, you can learn to become one of the better-calibrated professionals.”

Jason Zweig, The Wall Street Journal, June 17, 2016

fin24 on “How to become a Superforecaster”

“A key point is what Superforecasters do, not what they are. ‘Foresight isn’t a mysterious gift bestowed at birth. It is the product of particular ways of thinking, of gathering information, of updating beliefs. These habits of thought can be learned and cultivated by any intelligent, thoughtful, determined person,’ Tetlock concluded.”

Ian Mann, fin24, June 14, 2016