Background

In modern society, face recognition technology (FRT) is part of many people’s daily lives. One common application many are familiar with is using Face ID to unlock a smartphone. FRT captures facial measurements unique to each person, making it an important tool for confirming someone’s identity. In 1996, FRT gained attention from the U.S. government for its effectiveness in verifying identity compared to fingerprints. That year, the U.S. Department of Defense invested in the FERET project, which developed a large-scale face database. Since then, FRT has found its way into various government applications, with law enforcement being a primary user. The technology, driven by artificial intelligence (AI), enables police officers to compare facial images from patrols (or collected photos and videos) with existing public records such as mugshots, jail booking records, and driver’s licenses, as well as with commercial databases developed by technology companies.

According to a U.S. Government Accountability Office report, over 40 federal agencies have employed FRT, with plans for further expansion into areas beyond law enforcement. For example, Customs and Border Protection uses FRT to cross-reference images of departing passengers at boarding gates, and of passengers awaiting entry, against their visa or passport application photos to help verify identities for national security purposes. The Transportation Security Administration also employs FRT at domestic airports.

Despite the growing potential uses of FRT, concerns about its inaccuracy, potential for abuse, and privacy violations have led to movements aimed at banning government use of the technology. At the municipal level, following San Francisco’s lead, 16 other cities adopted bans on municipal uses of FRT between 2019 and 2021. In 2021, Virginia and Vermont banned FRT for law enforcement use; Virginia later lifted the ban in 2022, citing public safety concerns.

There remains a general lack of regulation and oversight at the federal level. In response to growing public concern, major FRT developers, including Amazon and Microsoft, declared they would not sell their products to police until Congress passed federal legislation regulating the technology. Currently, Congress is considering the Facial Recognition and Biometric Technology Moratorium Act of 2023. If passed, the bill would prohibit the use of facial recognition and biometric technologies by federal entities, and only Congress could lift the ban. It would also halt federal funding for developing FRT surveillance systems.

The major topics in this field include the cost of inaccuracy on innocent individuals, suppression of speech, and online privacy concerns.

The Cost of Inaccuracy

Supporters of movements to ban FRT argue that the risks of FRT’s inaccuracy outweigh its benefits when the technology is used by the government. False arrest cases, such as those of Ms. Woodruff, Mr. Williams, and five others, show how FRT’s inaccuracies disproportionately affect marginalized individuals. NIST’s 2019 report highlighted racial bias, showing that false positive matches were more common for African-American and Asian faces than for Caucasian faces, with African-American women particularly affected. Research also found that commercial FRT programs falsely identified darker-skinned women at higher rates than white male faces, and that they misidentified non-cisgender individuals.

In response, opponents of the ban contend that proper human oversight would resolve the inaccuracy issues over time. They argue against outright banning FRT because it can be an effective tool for maintaining public safety. Law enforcement, in particular, could use FRT for various purposes, including developing investigative leads, identifying victims, examining forensic evidence, and verifying individuals being released from prison. Law enforcement’s use of FRT has demonstrated successful outcomes, such as identifying an armed robber, a rapist, and a mass shooter. According to Pew Research Center’s 2021 survey, a plurality of U.S. adults (46%) believe police use of FRT would be a good idea for society, while 27% disagree and 27% are unsure.

Speech Suppression 

Supporters of ban movements also argue that abusive use of FRT by law enforcement for surveillance raises civil liberties issues. For instance, police could use FRT to target activists, especially at police reform protests, creating chilling effects that threaten free speech. The fear of having their faces captured by police at protests would deter people from expressing their political views. These chilling effects could intensify as police use of FRT increases, especially in combination with body cameras. Legal scholars have referred to FRT as “the most uniquely dangerous surveillance mechanism ever invented.”

On the other hand, opponents of ban movements believe that banning FRT is not necessary to address potential abuses of the technology. They advocate for a compromise under which law enforcement can use FRT subject to checks and balances. For example, Utah chose to limit law enforcement’s use of FRT with stringent approval requirements instead of banning the technology. Likewise, Massachusetts now requires police to obtain a court order before comparing images against the face photos and names in the databases of the Registry of Motor Vehicles, the FBI, and the state police. Also, amid rising crime rates and staffing shortages in law enforcement, some cities and states have recently begun to partially repeal their FRT bans.

Online Privacy

Proponents of the ban argue that developing the technology is itself a serious invasion of online privacy. They believe FRT should not be in the hands of a government whose duty is to protect the people. Training FRT AI models requires a massive number of face photos, many of which are available online. This demand has led FRT developers to scrape billions of people’s faces online without their knowledge or permission. Commentators argue that this practice is a loss of privacy for the affected individuals, who lose control of sensitive information that could be used against them.

However, some argue that banning privacy-invasive technologies may not be an effective solution, because banning legislation “may quickly become irrelevant with the advent of a newer technology not covered by the law.” (See, e.g., law review article at 396.) To discourage FRT developers from secretly using people’s faces, passing strong biometric privacy laws, like the Illinois Biometric Information Privacy Act (BIPA), could be a more practical solution. Such laws make it illegal to capture and use face prints and other biometrics without the subjects’ permission. Furthermore, copyright law, though seemingly unrelated, could help slow unauthorized uses of face photos while effective FRT legislation awaits passage.

Conclusion 

Both sides of the debate appear to agree on the need to regulate the government’s use of FRT; the disagreement lies in whether to ban the technology or merely restrict it. Proponents of bans worry that compromises would decrease the likelihood that a ban will ever be passed. Meanwhile, critics of the bans argue that this “ban or nothing” approach is itself responsible for the ongoing absence of regulation of government use of FRT.
