
Bad Statistics Lead to Misinformation

Sweep ’em out. That’s what ought to be done with research “findings” based on misguided analyses of inappropriate data. This is the stuff to which British statesman Benjamin Disraeli referred, famously citing “lies, damned lies, and statistics” to bemoan the willy-nilly use of numbers. Numbers can be, and often are, used to “prove” just about any program or policy that anybody with an agenda wants to praise or discredit. It’s an ongoing problem, and the field of highway safety is no exception. A new report by former Institute president Brian O’Neill and statistician Sergey Kyrychenko points to multiple examples of how motor vehicle death rates have been misinterpreted. These examples serve as powerful warnings of how not to use data.

Trends in the death rates have been widely used to measure highway safety progress over time and to compare relative highway safety performance among countries. Politicians often express national goals in terms of targeted reductions in the motor vehicle death rate per mile driven. The problem is that this rate is influenced by numerous factors that have nothing at all to do with traffic safety policies. An example is the presumption that a decline in deaths per mile traveled indicates that traffic safety programs are working effectively and vice versa. In fact, the relationship between miles traveled and fatality risk is more complicated. The risk per mile is much lower on congested freeways, for example, than on uncongested ones. Such differences in risk because of congestion do affect death rates, but they’re unrelated to traffic safety policies.

So even though it may seem appropriate to measure changes in deaths per mile over time or across jurisdictions to gauge the success or failure of highway safety countermeasures, these rates are influenced by too many factors unrelated to the countermeasures.

It’s the same with deaths per registered vehicle and per population. Per-vehicle rates can be useful for short-term comparisons, but over time and from jurisdiction to jurisdiction the composition of vehicle fleets changes (see p.2). Per-capita rates are influenced by changing demographics including, for example, the proportions of teenage and other high-risk drivers.
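To make the denominator problem concrete, here is a minimal sketch with invented counts (none of these figures come from the report): the same fatality total yields three different "rates" depending on which exposure measure is chosen, and each denominator can shift for reasons unrelated to safety policy.

```python
# Hypothetical illustration (all numbers invented): one fatality total,
# three different rates, depending on the denominator chosen.
deaths = 1_200
miles_traveled = 60_000_000_000   # vehicle miles driven
registered_vehicles = 5_000_000
population = 6_000_000

per_100m_miles = deaths / (miles_traveled / 100_000_000)    # per 100 million miles
per_10k_vehicles = deaths / (registered_vehicles / 10_000)  # per 10,000 vehicles
per_100k_people = deaths / (population / 100_000)           # per 100,000 people
```

If miles traveled rise while deaths stay flat, the per-mile rate falls even though nothing about safety has changed; the same is true of the other denominators as fleets and demographics shift.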

Competent researchers don’t use broad-brush rates like these to evaluate specific traffic safety programs. They use data directly related to the programs: for example, death rates on specific roads to assess the effects of speed limit changes on those roads. Such evaluations can lead to useful insights about program effectiveness and help guide policymakers.

“Just as often data are misused,” O’Neill says. “And whether they’re misused inadvertently or deceitfully, as Disraeli observed, to bolster a favored viewpoint, the result is the same. Policy can end up being misguided.”

Same data lead to opposite conclusions: A sure sign that data are being misused is when the same death rates are cited to “prove” opposite points of view. In 1999 the US Centers for Disease Control and Prevention cited the declining death rate on US roads from the 1970s through the 1990s to proclaim the success of the nation’s approach to reducing this public health problem. Meanwhile, Leonard Evans also tracked the declining US death rate over roughly the same period, comparing this trend with those in other countries. His main finding was that US policy has been a “dramatic failure” because the death rate in this country hasn’t declined as much as elsewhere.

So which is it? Have US traffic safety programs and policies succeeded or have they failed?

“We don’t know from Evans or from the Centers for Disease Control because neither one of them took differences in factors such as urbanization and demographics into account when comparing death rates over time or across jurisdictions,” O’Neill says.

How data are misused to justify speeding: Organized in 1982 to oppose the 55 mph speed limit, the National Motorists Association still opposes reasonable speed limits. To make its case, this group misuses motor vehicle death rates to try to make it seem as if safety is unrelated to speed limits and travel speeds. According to a 2005 news release, “the fatality rate has continued to decline despite higher speed limits and higher driving speeds. This clearly demonstrated that the 22-year-long experiment with an arbitrary national speed limit served no positive purpose.”

What’s overlooked is that per-mile death rates across all kinds of US roads (rural and urban, interstate highways and city streets) are too broad to assess the effects of a specific policy change like raising speed limits on specific roads.

Study after study confirms that deaths on rural interstates go up when speed limits are raised (see Status Report, Nov. 22, 2003). The National Motorists Association furthers its agenda by ignoring these findings of scientific studies in favor of misusing the irrelevant per-mile death rate.

Misuse of death rates in SUN countries: Another example involves the SUNflower report, a comparison of road safety policies in Sweden, the United Kingdom, and the Netherlands. These policies were studied because the three countries reportedly have the lowest death rates in the world, and the authors of the SUNflower report assumed this was because of the effectiveness of the safety policies.

However, O’Neill and Kyrychenko point out that the authors of the SUNflower report didn’t consider whether other countries with higher death rates might have equal or better traffic safety programs but worse demographics, less crowded roads, or other factors that can lead to higher death rates despite good safety policies and programs.

Misuse of state-by-state data: Four US jurisdictions (Connecticut, Massachusetts, New Hampshire, and Vermont) had lower mileage death rates than the SUN countries during the period of study. But this was largely because of urbanization and demographics in the New England states, not because they have especially good safety programs and policies.

Safety policies vary widely among US states, just as widely as among EU countries. But while nobody lumps together the death rates of the EU countries for comparison with elsewhere, this does happen with US state death rates: they’re frequently combined into an overall national rate for comparison with rates in other countries.

Except for a brief period in the late 1960s and early 1970s, the US government hasn’t been authorized to influence traffic safety programs aimed at drivers, such as belt laws, motorcycle helmet laws, and speed limits (see Status Report, Dec. 7, 2002). These programs, established by state legislators, vary widely from state to state. Largely because of differences among belt laws, for example, use rates vary from about 50 percent in some jurisdictions to more than 90 percent in others.

Comparing data as broad-brush as per-mile death rates across states obscures the effects of these differing programs and policies. For example, New Hampshire has the fourth lowest per-mile death rate among the 50 states. Does this mean its programs and policies are better or more effective than those in other states? No. In fact, New Hampshire is the only US state without a belt use law, and its buckle-up rate is much lower than in other states. Nor does New Hampshire have a motorcycle helmet law. Its per-mile death rate is low largely because of factors related to urbanization and demographics, not because of its safety policies.

O’Neill and Kyrychenko conducted statistical exercises, including regression analyses, to explore the effects of factors related to urbanization, demographics, and climate on death rates in New Hampshire and the other 49 states. The main finding is that the first two factors strongly influence state death rates. Climate differences also are influential, though not as much.
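The flavor of such a regression exercise can be sketched in a few lines. This toy example uses invented numbers, not the authors’ data, and a single made-up urbanization proxy (the rural share of miles driven); the point is that the residuals, what remains after the predictor is accounted for, are a fairer basis for comparing states than the raw rates.

```python
# Toy sketch of regressing state per-mile death rates on an urbanization
# proxy. All numbers are invented for illustration.
pct_rural = [80.0, 65.0, 50.0, 35.0, 20.0]  # % of miles driven on rural roads
rate = [2.2, 1.9, 1.6, 1.3, 1.0]            # deaths per 100 million miles

# Ordinary least squares for one predictor, computed directly.
n = len(pct_rural)
mean_x = sum(pct_rural) / n
mean_y = sum(rate) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(pct_rural, rate))
         / sum((x - mean_x) ** 2 for x in pct_rural))
intercept = mean_y - slope * mean_x

# Residuals (actual minus predicted) show how each state compares after
# the urbanization proxy is taken into account.
residuals = [y - (intercept + slope * x) for x, y in zip(pct_rural, rate)]
```

A state with a high raw rate but a near-zero residual is roughly where its road mix predicts; only a large residual hints at something a safety policy might explain.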

The very rural state of Montana, for example, has the highest per-mile death rate among the 50 states. What happens when its rate is standardized by urban versus rural mileage to match the US as a whole? Then Montana drops to 27th among the states in terms of its death rate per mile traveled. States with the highest per-mile rates also have the lowest median incomes, percentages of population with college degrees, and school spending per pupil. They have the highest proportions of high-risk drivers, those 16-20 years old. States with high population densities and traffic congestion have low per-mile death rates. In fact, almost 70 percent of the variability among passenger vehicle occupant death rates can be explained by urbanization and demographics.
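The reweighting described above is a direct standardization: weight each road-type rate by the national, rather than the state’s own, mileage share. A minimal illustration with invented figures (these are not the report’s numbers for Montana or any state):

```python
# Hypothetical direct standardization of a rural state's per-mile rate.
# All inputs are invented for illustration.
state_rates = {"rural": 2.4, "urban": 0.9}   # deaths per 100 million miles
state_mix = {"rural": 0.85, "urban": 0.15}   # share of the state's mileage
us_mix = {"rural": 0.35, "urban": 0.65}      # share of national mileage

# Crude rate: the state's own mileage mix weights its road-type rates.
crude = sum(state_rates[k] * state_mix[k] for k in state_rates)

# Standardized rate: same road-type rates, reweighted to the US mix.
standardized = sum(state_rates[k] * us_mix[k] for k in state_rates)
```

With inputs like these, the heavily rural mileage mix inflates the crude rate well above what the identical road-type rates produce under the national mix, which is exactly why Montana’s ranking moves so far once its rate is standardized.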

How to determine true policy effects: Factors unrelated to traffic safety policies can overwhelm the effects that might be accruing from specific programs. This doesn’t mean the programs aren’t worthwhile. There’s no question about whether a safety belt law would have saved lives over the years in New Hampshire. It would have. Lives also would have been saved if New Hampshire had a law requiring motorcyclists to wear helmets. But the effectiveness of specific traffic safety policies like belt and helmet use laws cannot be meaningfully evaluated by simply comparing overall state death rates.

Instead the evaluations have to start with relevant measures of program outcome - changes in motorcyclist death rates to evaluate the effects of helmet laws, for example. Then the evaluations have to account for factors such as climate and economic conditions that might be affecting the rates. Once these are accounted for, the program effects, if there are any, won’t be obscured. Then and only then can the findings be deemed meaningful enough to guide policymaking.

This is what Disraeli would have advised. O’Neill and Kyrychenko advise it too.

For a copy of “Use and misuse of motor vehicle crash death rates in assessing highway safety performance” by B. O’Neill and S. Kyrychenko, write: Publications, Insurance Institute for Highway Safety, 1005 North Glebe Road, Arlington, VA 22201.