Facebook Updates Ad Targeting Options to Reduce Discrimination
By Shannon Doyle and Dan Goldstein
Facebook is removing over 5,000 targeting options as part of its effort to protect users from discriminatory advertising. The affected ad targeting options are primarily demographic in nature, with an emphasis on ethnicity and religion. Although Facebook recognizes the vital role that these targeting options play in reaching relevant audiences, the company has made this change to minimize the risk of abuse in ad targeting.
In recent weeks, Facebook has been accused of enabling discrimination in employment and housing by allowing advertisers to exclude audiences based on race, gender, and other sensitive factors. Advocates claim that Facebook has enabled clear violations of federal law in housing and employment, specifically the Fair Housing Act of 1968 and the Civil Rights Act of 1964.
Contrary to popular belief, the multi-billion-dollar social media corporation first began revising its ad targeting options in 2016, around the time of the ProPublica Facebook scandal. The media hyperventilated over the revelation that Facebook permitted advertisers to exclude users by "Ethnic Affinities," which are inferred from the pages and posts Facebook users have engaged with. Months later, news that advertisers could target "Jew Haters" through Facebook ads flooded the internet, causing another uproar.
It also came to light that targeting options enabled advertisers to discriminate based on interests affiliated with a specific race or ethnicity. In an example used by AdAge, an advertiser had the option to exclude people interested in Passover. While advertisers could not exclude people who are Jewish, they could exclude people whose interests pertain specifically to Jewish identity. So, although Facebook does not categorize people by race or ethnicity, these characteristics can be singled out through interest-based targeting tied to a person's identification.
In response to the public scrutiny, Facebook took a strong stance on ensuring safety and civility for its users. The effort began with subtle improvements to Facebook's ad review process that focused on educating advertisers. The company hired additional staff to review ads and began advancing its machine learning technology in an effort to catch ads that violate its policies before they run. Additionally, Facebook took preventative measures by adding prompts to the ad platform that outline policy guidelines. Facebook's ultimate goal was to use machine learning technology, additional staff, and prompts to prevent discriminatory ads from being placed on Facebook.
A year ago, Facebook began requiring advertisers placing ads relating to housing, employment, or credit to certify their compliance with anti-discrimination policy and law. In the near future, all US advertisers will be required to complete a certification through the Ads Manager tool. This is intended to prevent discrimination through exclusion targeting in ads, as highlighted by the recent HUD investigation.
The threat of exclusion targeting extends far beyond the housing, employment, and credit industries, with ad discrimination seeping into every industry. For this reason, Facebook removed "thousands of categories from exclusion targeting", focusing "mainly on topics that relate to potentially sensitive personal attributes, such as race, ethnicity, sexual orientation and religion", as stated in the Facebook blog published in April. The adjustments to Facebook's targeting options were a response "based on findings and feedback from privacy, data ethics and civil rights experts, as well as charitable and advocacy organizations."
In a continued effort to respond to the threat of discriminatory advertising, Facebook announced this week it will be "removing over 5,000 targeting options to help prevent misuse". Although the social media platform recognizes the benefits of such targeting to the success of business advertising, it is taking a risk-averse approach to prevent discriminatory ads from running on the platform.
Since this new policy applies to all advertisers, it may actually hurt consumers. Some of these targeting options help advertisers reach specific audiences for legitimate purposes, and removing them makes it harder for legitimate advertisers to reach individuals who could benefit from specific products and services.