New York City Mayor Bill de Blasio had a major problem on his hands last month, one of his own making. He’d promised the city teachers’ union he would shut down the city’s enormous public school system, reopened not so long before, if the city’s “test positivity rate” hit 3 percent. And it had, so he shut the schools. Just 10 days later, with the rate even higher, at 3.9 percent, de Blasio reversed his decision, explaining to CNN that when the 3 percent cutoff he had once vigorously defended was put into place, the city “did not have the information we have now.”

This move away from the test positivity metric—usually calculated as the number of Covid-19 tests that come back positive divided by the number of tests performed—is significant. Since the start of the pandemic, most reporting on the spread of the disease has led with a simpler number: the count of diagnosed cases. This continues to be the top-line stat on major news sites’ Covid trackers, despite the fact that total case numbers have, at times, been quite misleading. Last spring, for example, tests were very scarce, and many cases went undetected. So policymakers came to rely on test positivity as an alternative. Thus the 3 percent threshold for school closures in New York City, and Connecticut’s directive that visitors from states where test positivity is higher than 10 percent must self-quarantine. But this replacement metric has been misunderstood.

Think about it. Test positivity is not a direct measure of new infections appearing in a population. It’s a ratio, and a ratio can increase in two ways: either when the numerator (in this case, the number of positive tests) rises, or when the denominator (in this case, the number of tests performed) falls. Since the number of tests varies from place to place and over time, test positivity may go up or down even when there is no change at all in disease spread.

As a result, while test positivity may be more informative than raw case numbers, it brings along its own distortions. The ratio will vary with the availability of tests, who’s deciding to get tested, and whether they can actually get into a test center when they try. The numbers may differ in sub-populations, too: the rate was still just 0.3 percent in New York City’s schools, for example, when the city’s overall rate jumped to 3.9 percent. And some states offer free testing only to people with symptoms, a policy that is guaranteed to push the test positivity rate higher.

One reason test positivity ratios came to be so widely used is that they showed up on the dashboard of Johns Hopkins University’s Coronavirus Resource Center early in the pandemic. But they were not meant to be used as a direct measure of coronavirus spread, says Jennifer Nuzzo, the lead epidemiologist at the center. Rather, the numbers were meant to show whether enough testing was being done. When she and her colleagues were first developing the website, they noticed that testing rates varied widely from one country to another. But countries that were doing well in handling the virus and had comprehensive surveillance in place showed test positivity rates between about 3 and 5 percent. “That prompted the realization that it made sense to track testing in that way,” she says. And practically speaking, they had very few other data points to consider at that stage.


Many epidemiologists see policymakers’ reliance on the test positivity ratio in similar terms, as an example of expediency: The number was around, so people started using it. The media, too, has picked up on test-positivity ratios and made them into screaming headlines. “Somewhere along the line, some wires got crossed,” says Michael Mina, a virologist and epidemiologist at the Harvard T.H. Chan School of Public Health. The ratio does not tell you, by itself, how widespread Covid infections have become in your community. “Test positivity is really not reflective of anything unless you know very well who is getting tested, and why,” Mina says.