Facebook to shut down face-recognition system, delete data

In this May 16, 2012, file photo, the Facebook logo is displayed on an iPad in Philadelphia. (AP Photo/Matt Rourke, File)

PROVIDENCE, R.I. (AP) — Facebook said it will shut down its face-recognition system and delete the faceprints of more than 1 billion people amid growing concerns about the technology and its misuse by governments, police and others.

“This change will represent one of the largest shifts in facial recognition usage in the technology’s history,” Jerome Pesenti, vice president of artificial intelligence for Facebook’s new parent company, Meta, wrote in a blog post on Tuesday.

He said the company was trying to weigh the positive use cases for the technology “against growing societal concerns, especially as regulators have yet to provide clear rules.” The company in the coming weeks will delete “more than a billion people’s individual facial recognition templates,” he said.

Facebook’s about-face follows a busy few weeks. On Thursday it announced Meta as the new name of the parent company, though not of the social network itself. The change, it said, will help it focus on building technology for what it envisions as the next iteration of the internet, the “metaverse.”

The company is also facing perhaps its biggest public relations crisis to date after leaked documents from whistleblower Frances Haugen showed that it has known about the harms its products cause and often did little or nothing to mitigate them.

More than a third of Facebook’s daily active users have opted in to have their faces recognized by the social network’s system. That’s about 640 million people. Facebook introduced facial recognition more than a decade ago but gradually made it easier to opt out of the feature as it faced scrutiny from courts and regulators.

In 2019, Facebook stopped automatically recognizing people in photos and suggesting that others “tag” them. Rather than keeping facial recognition on by default, it asked users to choose whether they wanted to use the feature.

Facebook’s decision to shut down its system “is a good example of trying to make product decisions that are good for the user and the company,” said Kristen Martin, a professor of technology ethics at the University of Notre Dame. She added that the move also demonstrates the power of public and regulatory pressure, since the face recognition system has been the subject of harsh criticism for over a decade.

Meta Platforms Inc., Facebook’s parent company, appears to be looking at new forms of identifying people. Pesenti said Tuesday’s announcement involves a “company-wide move away from this kind of broad identification, and toward narrower forms of personal authentication.”

“Facial recognition can be particularly valuable when the technology operates privately on a person’s own devices,” he wrote. “This method of on-device facial recognition, requiring no communication of face data with an external server, is most commonly deployed today in the systems used to unlock smartphones.”

Apple uses this kind of technology to power its Face ID system for unlocking iPhones.

Researchers and privacy activists have spent years raising questions about the tech industry’s use of face-scanning software, citing studies that found it worked unevenly across boundaries of race, gender or age. One concern has been that the technology can incorrectly identify people with darker skin.

Another problem with face recognition is that in order to use it, companies have had to create unique faceprints of huge numbers of people – often without their consent and in ways that can be used to fuel systems that track people, said Nathan Wessler of the American Civil Liberties Union, which has fought Facebook and other companies over their use of the technology.

“This is a tremendously significant recognition that this technology is inherently dangerous,” he said.

Facebook found itself on the other end of the debate last year when it demanded that facial recognition startup Clearview AI, which works with police, stop harvesting Facebook and Instagram user images to identify the people in them.

Concerns also have grown because of increasing awareness of the Chinese government’s extensive video surveillance system, especially as it’s been employed in a region home to one of China’s largely Muslim ethnic minority populations.

Facebook’s huge repository of images shared by users helped make it a powerhouse for improvements in computer vision, a branch of artificial intelligence. Now many of those research teams have been refocused on Meta’s ambitions for augmented reality technology, in which the company envisions future users strapping on goggles to experience a blend of virtual and physical worlds. Those technologies, in turn, could pose new concerns about how people’s biometric data is collected and tracked.

Facebook didn’t provide clear answers when asked how people could verify that their image data was deleted and what the company would be doing with its underlying face-recognition technology.

On the first point, company spokesperson Jason Grosse said in an email only that user templates will be “marked for deletion” if their face-recognition settings are on, and that the deletion process should be completed and verified in the “coming weeks.” On the second point, Grosse said that Facebook will be “turning off” components of the system associated with the face-recognition settings.

Meta’s newly wary approach to facial recognition follows decisions last year by other U.S. tech giants such as Amazon, Microsoft and IBM to end or pause their sales of facial recognition software to police, citing concerns about false identifications, amid a broader U.S. reckoning over policing and racial injustice.

At least seven U.S. states and nearly two dozen cities have limited government use of the technology amid fears over civil rights violations, racial bias and invasion of privacy.

President Joe Biden’s science and technology office in October launched a fact-finding mission to look at facial recognition and other biometric tools used to identify people or assess their emotional or mental states and character. European regulators and lawmakers have also taken steps toward blocking law enforcement from scanning facial features in public spaces.

Facebook’s face-scanning practices also contributed to the $5 billion fine and privacy restrictions the Federal Trade Commission imposed on the company in 2019. Facebook’s settlement with the FTC included a promise to require “clear and conspicuous” notice before people’s photos and videos were subjected to facial recognition technology.

And the company earlier this year agreed to pay $650 million to settle a 2015 lawsuit alleging it violated an Illinois privacy law when it used photo-tagging without users’ permission.

“It is a big deal, it’s a big shift but it’s also far, far too late,” said John Davisson, senior counsel at the Electronic Privacy Information Center. EPIC filed its first complaint with the FTC against Facebook’s facial recognition service in 2011, the year after it was rolled out.
