On Wednesday, Facebook COO Sheryl Sandberg outlined steps the company is taking to address its latest ad-targeting controversy.
Last week, ProPublica revealed that Facebook's ad-targeting options included the ability to target ads to people who had listed "Jew hater" as their field of study and "Nazi Party" as their employer. In reaction, Facebook removed four ad-targeting fields populated by an algorithm based on info people entered into their Facebook profiles. Education, employer, field of study and job title targeting were disabled for new campaigns.
Now, the company is instituting "more manual review of new ad targeting options to help prevent offensive terms from appearing," according to a Facebook post by Sandberg (embedded below). The company has also re-enabled some of the ad-targeting options that are based on self-reported user data.
"After manually reviewing existing targeting options, we are reinstating the approximately 5,000 most commonly used targeting terms – such as 'nurse,' 'teacher' or 'dentistry.' We have made sure these meet our Community Standards," said Sandberg, adding, "From now on we will have more manual review of new ad targeting options to help prevent offensive terms from appearing."
The company also plans to create a program for people to "report potential abuses of our ads system to us directly," wrote Sandberg.
Additionally, Facebook said it is "clarifying our ad policies and tightening our enforcement process to ensure that content that goes against our community standards cannot be used to target ads," according to Sandberg.
Facebook will step up existing enforcement against targeting "that directly attacks people based on their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity, or disabilities or diseases."
It's unclear what specific steps Facebook is taking to that end, aside from the ones laid out in Sandberg's post. A Facebook spokesperson said the company will release an update to its policies that outlines those steps sometime in the future.
Sandberg's post in full:
Last week we temporarily disabled some of our ads tools following news reports that slurs or other offensive language could be used as targeting criteria for ads. If someone self-identified as a "Jew hater" or said they studied "how to burn Jews" in their profile, those terms showed up as potential targeting options for advertisers.
Seeing those words made me disgusted and disappointed – disgusted by these sentiments and disappointed that our systems allowed this. Hate has no place on Facebook – and as a Jew, as a mother, and as a human being, I know the damage that can come from hate. The fact that hateful terms were even offered as options was totally inappropriate and a fail on our part. We removed them and when that was not totally effective, we disabled that targeting section in our ad systems.
Targeted advertising is how Facebook has helped millions of businesses grow, find customers, and hire people. Our systems match organizations with potential customers who may be interested in their products or services. The systems have been particularly powerful for small and medium-sized businesses, who can use tools that previously were only available to advertisers with large budgets or sophisticated marketing teams. A local restaurant can shoot video of their food prep with just a phone and have an ad up and running within minutes, and pay only the amount needed to show it to real potential customers. Most of our targeting is based on categories we provide. In order to allow businesses – especially small ones – to find customers who might be interested in their specific products or services, we offered them the ability to target profile field categories like education and employer. People wrote these deeply offensive terms into the education and employer write-in fields, and because these terms were used so rarely, we did not discover this until ProPublica brought it to our attention. We never intended or anticipated this functionality being used this way – and that is on us. And we did not find it ourselves – and that is also on us.
Today, we are announcing that we are strengthening our ads targeting policies and tools.
First, we're clarifying our ad policies and tightening our enforcement process to ensure that content that goes against our community standards cannot be used to target ads. This includes anything that directly attacks people on the basis of their race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity, or disabilities or diseases. Such targeting has always been in violation of our policies and we are taking more steps to enforce that now.
Second, we're adding more human review and oversight to our automated processes. After manually reviewing existing targeting options, we are reinstating the approximately 5,000 most commonly used targeting terms – such as "nurse," "teacher" or "dentistry." We have made sure these meet our Community Standards. From now on we will have more manual review of new ad targeting options to help prevent offensive terms from appearing.
And third, we are working to create a program to encourage people on Facebook to report potential abuses of our ads system to us directly. We have had success with such programs for our technical systems and we believe we can do something similar with ads.
We hope these changes will prevent abuses like this going forward. If we discover unintended consequences in the future, we will be relentless in identifying and fixing them as quickly as possible. We have long had a firm policy against hate on Facebook. Our community deserves to have us enforce this policy with deep caution and care.
Read more: marketingland.com