The current state of rules for use of facial recognition technology is literally all over the map. Next month, the city council in Portland, Oregon, will hold a public meeting about blocking use of the technology by private companies, as well as by the government. San Francisco; Oakland, California; and Somerville, Massachusetts, have already banned the use of facial recognition technology by city agencies; Seattle’s police stopped using it last year; and Detroit has said facial recognition can be used only in connection with investigation of violent crimes and home invasions (and not in real time).

State governments have their own rules too. In October, California joined New Hampshire and Oregon in prohibiting law enforcement from using facial recognition and other biometric tracking technology in body cameras. Illinois passed a law that permits individuals to sue over the collection and use of a range of biometric data, including fingerprints and retinal scans as well as facial recognition technology. Washington and Texas have laws similar to the one in Illinois, but don’t allow for private suits.

In other words, we’re headed for a major clash. The potential benefits of facial recognition, and biometric data generally, are just too great for governments and corporations to pass up. Existing bans on public-sector use that are based on the technology’s present, inaccurate, and discriminatory implementations likely won’t be sustainable long-term as it improves. At the same time, completely unfettered use of private biometric systems seems incompatible with American values. We’re not China, or at least not yet.

This situation is crying out for policy development: Government needs to act to determine where the lines of appropriate use should be drawn. This is unlikely to happen at the federal level anytime soon, though: Even as pressure from activists builds, Congress has so far been unable to pass even a basic federal online privacy law, and this month’s House Oversight Committee hearing on facial recognition has just been punted to next year. (A proposed bipartisan bill to constrain the use of the technology by federal law enforcement officers would address just a sliver of the issues raised by the use of biometric identifiers.) That leaves the issues to be worked out in different ways in different places, as a patchwork of local laws. Tech and telecom companies often moan about just this sort of outcome, complaining that it makes compliance difficult and drives up production costs—but in this case, it’s a good thing.

When federal policy is absent, ham-handed, or hopelessly captured by industry, local governments can act as testing grounds for new ideas, providing proof that the status quo can change. This is not a new idea: As Supreme Court Justice Louis Brandeis wrote in 1932, a “state may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.” That approach—of using local laws as laboratory trials—worked when it came to spreading the power grid across the country. States and localities led the way in making electricity a publicly governed utility. The same thing happened in health care: Former Massachusetts Governor Mitt Romney has said that “without Romneycare [in Massachusetts] we wouldn’t have had Obamacare.”

The patchwork can work for tech too. In October, the federal appeals court for the District of Columbia Circuit issued a 186-page opinion allowing states to continue to impose their own “open internet” laws and executive orders in the absence of any federal regulation of high-speed internet access. As telecom commentator Harold Feld wrote, this gives the industry “significant incentive to stop fooling around and offer real concessions to get some sort of federal law on the books.” In other words, the patchwork is usefully painful for companies: The agony stimulates them to come to the table.