Facebook Is Ending A Controversial Program
Facebook has announced that it's ending a highly controversial program.
Facebook, now Meta, has long been under fire for how it handles user data. The company and its CEO, Mark Zuckerberg, are currently in court over their respective roles in the Cambridge Analytica scandal. They are also contending with the accounts of various whistleblowers, most notably Frances Haugen, whose accusations suggest that although Facebook knew about its improper collection and misuse of user data, as well as its role in the spread of disinformation, it ignored that knowledge and made feeble attempts, if any, to rectify the problems. Now, Facebook finally seems to be feeling the pressure from lawmakers and regulators and realizing that it is not immune to legal scrutiny and that its actions have real-world implications. CNBC reported Tuesday that Facebook announced it would shut down its facial recognition software and delete the one billion individual facial templates it had acquired through it.
The facial recognition software, according to The New York Times, was initially put in place in 2010. Back then, Facebook asserted that it was integrating the software into its app so that users could save time when tagging friends and cataloging photos. The software was designed to instantly recognize someone’s face by matching it against the facial templates that Facebook stored on its servers.
However, over the past decade, there has been growing concern about the legality and ethics of such software, particularly in relation to individual privacy. Essentially, it comes back to a basic question: Is using software like this an egregious violation of personal privacy?
Facebook’s current answer to that question was revealed in a blog post put out by the vice president of artificial intelligence at Meta, Jerome Pesenti. Pesenti said there are “many concerns about the place of facial recognition technology in society,” and he continued that “every new technology brings with it potential for both benefit and concern, and we want to find the right balance.”
The decision to delete the data and cease use of the software comes as the company navigates more than its current legal issues. The New York Times noted that it also follows a $5 billion fine from the Federal Trade Commission in 2019 for privacy violations and, according to the Associated Press, a $650 million settlement in Illinois over the collection of biometric data (in this case, facial templates) without user consent.
Moreover, CNN pointed out that Pesenti explained further in the blog post that even though Facebook (Meta) still believes its software to be a powerful tool, and even though dismantling it will directly affect the visually impaired users who frequently rely on its benefits, it is best for now to end the program. “We need to weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules,” wrote Pesenti.
Pesenti’s comments suggest that even though Facebook has elected to stop using facial recognition software and delete the accompanying user data, the company intends to revisit the possibility of reimplementing it once it feels sufficient regulations and guidelines have been established. In fact, it is still considering using the technology in its picture-taking smart glasses, which look like they were plucked right out of a spy movie.
However, for now, Facebook’s decision to stop using the facial recognition technology is what Woodrow Hartzog, a law and computer science professor at Northeastern University, calls a “win…for ongoing privacy advocacy and critiques of tech companies.”