In January, when New York’s Public Oversight of Surveillance Technology Act went into effect, the New York City Police Department was suddenly forced to detail the tools it had long kept from public view. But instead of giving New Yorkers transparency, the NYPD issued error-filled, boilerplate statements that hide almost everything of value. Almost none of the policies list specific vendors, surveillance tool models, or information-sharing practices. The department’s facial recognition policy says it can share data “pursuant to on-going criminal investigations, civil litigation, and disciplinary proceedings,” a standard so broad it’s largely meaningless.

This marks the greatest test yet of Community Control of Police Surveillance (CCOPS), a growing effort to ensure that the public can take back control over decisions about how their communities are surveilled, deciding whether tools like facial recognition, drones, and predictive policing are acceptable for their neighborhoods. The battle playing out in New York City—over not just what tech police are permitted to use but how they use it, how that use is overseen, and how it’s disclosed—holds broad lessons for the future of surveillance. As more cities and municipalities around the country implement policies on surveillance technologies like facial recognition, and as more citizens push for CCOPS in their own communities, the challenges and shortcomings faced in New York City show that transparency requirements on paper only matter when the public forces police to abide by them.

Surveillance technologies already widely used by police departments across the country often make surveillance cheaper, faster, and more passive. Take facial recognition: when run on video cameras in public squares, it can monitor faces constantly through an algorithm (making it cheaper and faster), from afar and in passing (requiring no physical search), and even outside the bounds of traditional Fourth Amendment warrant processes. Other examples abound: drones flown over protest crowds; police cars equipped with automatic license plate readers that scan and centrally store license plates as law enforcement vehicles drive down streets or through parking lots. Algorithms, meanwhile, are used across the criminal justice system, from police precincts “predicting” crime to bail hearings to the sentencing bench.

Despite examples like that of the NYPD, there have been numerous CCOPS success stories. The earliest adopter of the CCOPS model was Oakland, California, where generations of advocacy against police violence, primarily by Black and Latinx advocates, culminated in 2015 with the creation of the Oakland Privacy Commission. Oakland’s ordinance wasn’t just the first but also the strongest, granting the Privacy Commission independence and the full power to approve or ban police surveillance tools. Since its creation, the Privacy Commission has repeatedly questioned department officials, restricted the use of drones, fully banned predictive policing and biometric surveillance software, and most recently voted to recommend that Oakland police stop using automatic license plate readers.

Across the bay, San Francisco followed suit with its own CCOPS law in 2019. While it didn’t go so far as to create an independent commission, it empowered the city’s legislature to approve or ban police surveillance tools. Notably, the bill also included a ban on government use of facial recognition, the first in the country. Numerous cities have done the same in the months since, banning targeted technologies like facial recognition or improving overall accountability. Four jurisdictions have also banned police from signing nondisclosure agreements with surveillance vendors, taking away a common excuse for police opacity. Other success stories include that of San Diego, whose city council passed a surveillance-governing ordinance at the end of 2020 after backlash over a police “smart streetlight” program.

None of these decisions appeared out of thin air; a confluence of community activism, media reporting, attention from local politicians, and other factors turned these ideas for surveillance reform into reality. New York City is currently running into a number of challenges with its own surveillance oversight that highlight this need for constant work: making surveillance oversight not just about transparency on paper but also about compelling and enforcing changes in police practice.

Per the Public Oversight of Surveillance Technology Act, the NYPD published an initial list of deployed surveillance technologies that includes audio recording devices, cell-site simulators, license plate readers, and facial and iris recognition. The public has until February 25 to submit comments in response. But issues plague these newly required disclosures, because adequate democratic oversight of these surveillance technologies is not achieved merely by knowing they exist. The department’s published documentation on facial recognition contains the same copied-and-pasted assurances as every other policy, claiming that the tools will be used only for legitimate law enforcement purposes.