Flawed Analysis Of Red Light Camera Program Draws Institute Critique

Editor’s note: On October 4, 2005, The Washington Post published a review of the District of Columbia’s red light camera program by reporters Del Quentin Wilber and Derek Willis. The gist was that the cameras haven’t reduced crashes. Institute researchers reviewed the reporters’ analyses, finding fundamental flaws, and communicated the following critique to The Post on October 7.

The most obvious flaw is in the data that were used, which appear to show an almost 50 percent citywide increase in all crashes from 1999 to 2000. Such an increase is out of line with other years in the dataset and cannot be explained by any obvious factor such as an increase in traffic. Inquiring about this, the Institute learned from the D.C. Police Department’s Inspector Patrick Burke that a change in the way crash statistics were reported and recorded occurred between 1999 and 2000. Burke said he informed the reporters of this and cautioned them not to use the data for before-and-after comparisons. Yet the reporters used the invalid dataset.

Wilber and Willis reported that “broadside crashes, also known as right-angle or T-bone collisions, rose 30 percent” at camera intersections from 1998 to 2004. This should have raised an instant red flag because (1) it is illogical on its face, and (2) the finding is far out of line with numerous findings published in peer-reviewed scientific journals. There is absolutely no reason to believe that cameras cause right-angle crashes to increase; the worst that could be expected is that cameras would fail to reduce crashes. Not even the most vociferous opponents of red light cameras claim they increase right-angle impacts.

Wilber and Willis should have been more skeptical. They should have dug deeper into the data. Scientists do this routinely, especially when they come up with findings that are out of line with other scientific research. Before reaching apparently contradictory findings, good researchers go back to the datasets they are using to try to understand why the apparent findings are different from prior research. When reporters conduct their own analyses, they should apply the same rigor.
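One routine form of such digging is a simple sanity check for discontinuities before trusting a time series for before-and-after comparisons. The sketch below illustrates the idea with hypothetical annual totals (not actual D.C. crash data); the threshold and figures are assumptions for demonstration only.

```python
# Illustrative sketch: flag suspicious year-over-year jumps in annual crash
# totals before using them for before-and-after comparisons.
# The figures below are hypothetical, not actual D.C. crash data.
crashes_by_year = {
    1997: 20100, 1998: 20500, 1999: 20300,
    2000: 30100,  # an abrupt ~48% jump, like the one described in the text
    2001: 30400, 2002: 30200,
}

def flag_discontinuities(series, threshold=0.25):
    """Return (year, fractional change) pairs where the change from the
    prior year exceeds the threshold in absolute value."""
    years = sorted(series)
    flagged = []
    for prev, curr in zip(years, years[1:]):
        change = (series[curr] - series[prev]) / series[prev]
        if abs(change) > threshold:
            flagged.append((curr, change))
    return flagged

print(flag_discontinuities(crashes_by_year))
```

A jump this large should prompt the analyst to ask whether reporting or recording practices changed in that year, exactly the question Inspector Burke answered for the reporters.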

Wilber and Willis should have first reviewed existing research, which among other findings indicates that cameras reduce red light running and crashes at all intersections in a community, not just those with cameras; this is referred to as a spillover effect. Yet in their analysis Wilber and Willis compared crashes at D.C. intersections with and without cameras to assess the effectiveness of the cameras. Thus, the analysis lacks the very first requirement for estimating effects: a reasonable expectation of what would have happened without the cameras.
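A toy calculation shows why a spillover effect makes a within-city, camera-versus-no-camera comparison misleading. All numbers below are assumptions chosen for illustration, not findings from any study:

```python
# Toy illustration (hypothetical numbers): if cameras reduce crashes citywide
# through a spillover effect, comparing camera intersections to non-camera
# intersections in the same city misses most of the benefit.
baseline = 100              # crashes per intersection with no camera program
direct_reduction = 0.40     # assumed effect at camera intersections
spillover_reduction = 0.30  # assumed effect at the city's other intersections

camera_sites = baseline * (1 - direct_reduction)    # 60 crashes
other_sites = baseline * (1 - spillover_reduction)  # 70 crashes

# The flawed within-city comparison sees only the gap between the two groups...
apparent_effect = 1 - camera_sites / other_sites    # roughly a 14% reduction
# ...while the true effect is measured against the no-program counterfactual.
true_effect = 1 - camera_sites / baseline           # a 40% reduction

print(f"apparent effect: {apparent_effect:.0%}, true effect: {true_effect:.0%}")
```

Because the comparison group is itself affected by the program, the within-city design understates the cameras' benefit; only a comparison against what would have happened without the program yields the true effect.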

Lives are at stake, and The Post needs to take more care before reporting inaccuracies that could mean more traffic deaths.