
Razib Khan's Unsupervised Learning


Sep 29, 2022

How do we know when to trust the experts? On January 23rd, 2020, Vox published a piece titled The evidence on travel bans for diseases like coronavirus is clear: They don’t work. Journalists are largely limited to reporting what experts tell them, and in this case, it seems Vox's experts misled them. By December 2020, The New York Times could reflect that “interviews with more than two dozen experts show the policy of unobstructed travel was never based on hard science. It was a political decision, recast as health advice, which emerged after a plague outbreak in India in the 1990s.” The coronavirus pandemic has driven home for many that expertise and specialized knowledge are not so simple, that “trusting the science” isn’t always straightforward, and that hasty decisions can have global consequences. Nor is this problem new: the political scientist Philip Tetlock’s 2005 Expert Political Judgment: How Good Is It? How Can We Know? reported that the most confident pundits often prove the least accurate.

To get around the biases and limitations of individuals, there has been a recent vogue for “prediction markets,” which harness distributed knowledge and bake “skin in the game” into the forecasting process. On this episode of Unsupervised Learning, Richard Hanania joins Razib to discuss his think tank’s collaboration with UT Austin’s Salem Center for Policy and Manifold Markets on a forecasting tournament. What is their goal? What are the limitations of these sorts of markets? Why do they not care about contestants’ credentials? Razib pushes Hanania on the idea that there is no expertise, and they discuss domains where the application of specialized knowledge has concrete consequences (civil engineering) as opposed to those where it does not (political and foreign policy forecasting).
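For readers curious how a prediction market turns bets into probabilities, here is a minimal sketch of a logarithmic market scoring rule (LMSR), one common market-making mechanism. This is an illustration of the general idea only, not necessarily the algorithm Manifold Markets or the Salem Center tournament actually uses; the liquidity parameter `b` and the share quantities are hypothetical.

```python
import math

# Minimal LMSR sketch for a binary (YES/NO) market. Traders buy shares;
# the instantaneous price of YES is the market's implied probability.
# `b` is the liquidity parameter: larger b means prices move less per bet.

def cost(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """LMSR cost function C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """Price of a YES share, i.e. the market's implied probability."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def buy_yes(q_yes: float, q_no: float, delta: float, b: float = 100.0) -> float:
    """Cost of buying `delta` YES shares: the change in the cost function.
    Paying real (or play) money to move the price is the 'skin in the game'."""
    return cost(q_yes + delta, q_no, b) - cost(q_yes, q_no, b)

if __name__ == "__main__":
    q_yes, q_no = 0.0, 0.0                 # market opens at 50/50
    print(f"opening probability: {price_yes(q_yes, q_no):.2f}")   # 0.50
    paid = buy_yes(q_yes, q_no, 50)        # a trader bets on YES
    q_yes += 50
    print(f"trader paid {paid:.2f} for 50 YES shares")            # ~28.09
    print(f"new implied probability: {price_yes(q_yes, q_no):.2f}")  # ~0.62
```

The design point is that each trade moves the price, so the market aggregates many individuals' beliefs into a single probability, and traders who are wrong lose money, which is the incentive structure the episode discusses.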

Hanania also addresses his decision to leave Twitter after his most recent ban.