Following its April 2017 white paper investigating the potential misuse of its platform, Facebook announced yesterday that it would make further efforts to detect and block content (and the accompanying ad spend) from fake accounts.
It’s the most recent outcome of investigations into potential use of the social network to influence voter opinions and decisions during the 2016 U.S. presidential election. As the official statement points out, “these are serious claims,” and Facebook has been reviewing a number of activities on its site to aid a broader investigation, among them all ad spend between June 2015 and May 2017.
The findings: During that time, approximately $100,000 in total ad spend, associated with about 3,000 ads, was found to be connected to about 470 “fake” accounts and Pages: those that violated Facebook’s policies and weren’t associated with an official brand or organization. What’s more, the managers behind these Pages were found to be interlinked and based in Russia.
But this isn’t a political post.
Rather, to us, this news is a major signifier of the effectiveness of social media advertising, as well as a reinforcement of its presence in our lives. The median user spends between 35 and 50 minutes on Facebook per day. Advertisers often equate that much exposure with many eyes on a devoted brand, cause, or organization. But this latest development shows that beyond forming a strategy behind that ad content, advertisers also need to plan ahead for where it might appear.
“I think it’s telling that people who wanted to influence the election took to Facebook. It was clear to them it’s the best route to influence the most people,” says Marcus Andrews, a product marketing manager at HubSpot. “Ads aren’t at fault here — the manipulative humans who abuse them are.”
What does that mean for marketers? Let’s go over what we know.
What Facebook’s Latest Information Operations Announcement Means
What We Know
For many months now, Facebook has been emphasizing its efforts to prevent and reduce the use of its platform to spread misinformation through fake accounts. Some of these efforts are rooted in laws that Facebook, as a business, must follow to aid in high-level investigations, while others, aimed at “protecting the integrity of civic discourse,” as the statement reads, require advertisers to follow certain rules. One major step on that path to authentic information sharing is banning Pages that are found to repeatedly distribute and promote fake news on Facebook.
But part of those efforts have to be preventative, which means implementing technology and practices that keep accounts engaging in this sort of activity from being created and permitted to advertise in the first place. That entails building ways to determine a Page’s nature and intention, to figure out whether it meets those criteria as soon as it’s created. The answer, Facebook largely believes, lies in automation and other digital improvements. Among them, the statement outlines:

- “applying machine learning to help limit spam and reduce the posts people see that link to low-quality web pages;
- adopting new ways to fight attempts to disguise the true destination of an ad or post, or the real content of the destination page, in order to bypass Facebook’s review processes;
- reducing the influence of spammers and deprioritizing the links they share more frequently than regular sharers;
- reducing stories from sources that consistently post clickbait headlines that withhold and exaggerate information; and
- blocking Pages from advertising if they repeatedly share stories marked as false.”
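To make the logic of those last two steps concrete, here is a purely illustrative sketch of how rule-based enforcement like this can work. Everything in it is hypothetical: the function names, the strike limit, and the share-ratio threshold are invented for illustration and say nothing about how Facebook’s actual (non-public) systems are built.

```python
# Hypothetical toy model of two of the enforcement steps described above:
# blocking repeat offenders from advertising, and deprioritizing links
# shared far more often than a typical user would share them.

FALSE_STORY_LIMIT = 3   # hypothetical: strikes before a Page loses ad access
SPAM_SHARE_RATIO = 5.0  # hypothetical: shares-per-day multiple vs. a typical user

def can_advertise(page: dict) -> bool:
    """Block ads for Pages that repeatedly share stories marked as false."""
    return page.get("false_story_strikes", 0) < FALSE_STORY_LIMIT

def link_weight(shares_per_day: float, typical_shares_per_day: float = 2.0) -> float:
    """Give full weight to typical sharers; scale down heavy spam-like sharers."""
    ratio = shares_per_day / typical_shares_per_day
    if ratio < SPAM_SHARE_RATIO:
        return 1.0
    return 1.0 / ratio  # the more spam-like the sharing, the lower the weight

page = {"name": "Example Page", "false_story_strikes": 4}
print(can_advertise(page))   # a Page past the strike limit can no longer buy ads
print(link_weight(1.5))      # a typical sharer keeps full weight
print(link_weight(20.0))     # a heavy sharer's links are deprioritized
```

The point of the sketch is only that these policies reduce to simple, automatable checks once a signal (a false-story flag, a sharing rate) has been attached to an account; the hard part, which the statement attributes to machine learning, is producing those signals reliably.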