Case Studies


Bullshit in the wild

Spotting bullshit in the wild isn't something you have to let others do for you. To illustrate this, we've provided a set of case studies based on examples of bullshit in the wild. We've spotted many of these ourselves; some have come via other channels. These cases aren't the most egregious examples out there, but each illustrates one or more of the principles and practices that we aim to teach in this course. We will be adding additional case studies on a regular basis.

Case studies

Basic

These case studies require only clear thinking and occasionally a bit of arithmetic. As such, they should be readily accessible to all of our readers.

  • Food stamp fraud. In this example drawn from a Fox News story, we demonstrate how Fermi estimation can cut through bullshit like a hot knife through butter. (A brief sketch of the technique follows this list.)

  • Traffic improvements. Irrelevant facts can lead you to bullshit conclusions if you approach them with an inaccurate model of how the world works. And if those conclusions let you spin a trite story about terrible traffic and wasteful government expenditures, what better clickbait?

  • 99.9% caffeine-free. In one section of his book The Demon-Haunted World, Carl Sagan decried the way that advertisers try to dazzle us with irrelevant facts and figures. He was mostly concerned with drug advertising; in this case study we explore a more innocuous example.

  • Criminal machine learning. Machine learning algorithms are sometimes touted as generating results that are unbiased and free of prejudice. This is bullshit. We explore an example in which two authors claim to have an algorithm that can determine whether someone is a criminal simply from an 80x80 facial image, and show that this algorithm is actually doing something very different.

  • Machine learning about sexual orientation? In this case study, we discuss a controversial paper that claims a deep neural network can predict sexual orientation from facial photographs. We illustrate how one can question the interpretation of results without delving into the details of the machine learning algorithm used to generate them.

  • America's best barbecue? A food website ranked the quality of barbecue in 75 US cities, based on average restaurant reviews on TripAdvisor. What could possibly go wrong?
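
For readers curious what a Fermi estimate looks like in practice, here is a minimal sketch of the kind of back-of-the-envelope arithmetic used in the food stamp case study. The dollar figures in the code are made-up placeholders, not numbers from the Fox News story or any official source; the point is how comparing a scary-sounding number to the right denominator reveals its true scale.

```python
# Fermi estimation: a back-of-the-envelope sanity check on an alarming-sounding number.
# All figures below are hypothetical placeholders chosen for illustration only;
# they are NOT numbers from the Fox News story or from any official source.

reported_fraud_dollars = 70e6   # headline figure of "tens of millions" in fraud (hypothetical)
total_program_dollars = 70e9    # rough annual scale of a large federal program (hypothetical)

fraud_fraction = reported_fraud_dollars / total_program_dollars
print(f"Fraud as a share of program spending: {fraud_fraction:.2%}")
# -> 0.10%. A number that sounds alarming in isolation can turn out to be a
#    rounding error once it is compared to the relevant denominator.
```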

Intermediate

These case studies introduce some basic concepts from statistics such as sample size and extrapolation. However, they do not require any technical statistical knowledge to follow, and most readers should find them relatively accessible.

  • Track and field records as examples of senescence. We lead off our series of case studies by calling bullshit on a figure in one of our own publications. We explore how differences in sample size can create misleading patterns in data, and we show how writing a simulation can be an effective way of calling bullshit. (A minimal example of such a simulation follows this list.)

  • A gender gap in 100 meter dash times. We examine a 2004 Nature paper predicting that women sprinters will outrun men by the mid-22nd century. In doing so, we see the danger of over-extrapolation, and we get to read a beautiful example of reductio ad absurdum as a means of calling bullshit.

  • Musicians and mortality. Here we consider what can go wrong as one goes from scholarly article to popular science piece to social media meme. We explore why a data graphic shared widely on social media gives a misleading impression, explain the issue of right-censoring, and discuss how its effects can be seen in the light of correlation analysis.
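
To give a flavor of the simulation approach mentioned in the track and field case study, the sketch below draws "performances" for groups of very different sizes from the same distribution. The group sizes and the distribution are invented for illustration; they are not data from the case study.

```python
import random

# Minimal illustration of how sample size alone can create apparent patterns in records.
# Group sizes and the performance distribution are made up for this sketch.

random.seed(0)

group_sizes = {"small group": 50, "medium group": 500, "large group": 5000}

for name, n in group_sizes.items():
    # Every group draws from the SAME distribution, so true ability is identical.
    performances = [random.gauss(10.0, 0.5) for _ in range(n)]
    record = min(performances)  # "record" = best (lowest) simulated time
    print(f"{name:12s} (n={n:5d}): best time {record:.2f}")

# Larger groups reliably yield more extreme "records" simply because they get
# more draws from the tail of the distribution -- no real difference required.
```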

Advanced

These case studies make extensive use of calculus and/or mathematical statistics. They may be of interest to readers without a strong background in those areas, but they will be most accessible to readers who know some calculus and statistics.

  • NIH's Rule of 21 - Part 1. The NIH wanted to restrict the number of grants that a single investigator could hold, and tried to justify this policy using data visualizations purporting to illustrate decreasing marginal returns on investment in a given lab. Here we show that their graphs fail to demonstrate this and explain where they went wrong. (A toy illustration of decreasing marginal returns follows this list.)

  • NIH's Rule of 21 - Part 2. It gets worse. The entire enterprise of looking at researcher output as a function of funding received, and then attempting to optimize the allocation of funding based on these data, is fundamentally flawed. We explain why.
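
To make the phrase "decreasing marginal returns" concrete, here is a toy illustration using a hypothetical concave output function, with output proportional to the square root of funding. This is an arbitrary assumption chosen for the sketch, not a model drawn from the NIH's data or from our analysis.

```python
import math

# Illustration of decreasing marginal returns with a HYPOTHETICAL output function.
# output = sqrt(funding) is an arbitrary concave choice for this sketch; it is not
# the model used by the NIH or in our case study.

def output(funding_in_grants: float) -> float:
    return math.sqrt(funding_in_grants)

for grants in range(1, 6):
    marginal = output(grants) - output(grants - 1)
    print(f"grant #{grants}: marginal output {marginal:.2f}")

# Each additional grant adds less output than the one before it; that is the
# pattern the NIH's figures were supposed to demonstrate.
```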