Facebook’s new political labeling rules are catching some unexpected fish. Marketing Dive and a number of other publications report that both Walmart and Procter & Gamble had advertisements rejected by Facebook for lacking “paid for by” disclaimers on content that Facebook (and one has to guess this was an algorithm) deemed political in nature.
The Walmart ad included language about “bringing jobs back to America.” The Procter & Gamble ad touted the company’s commitment to LGBTQ inclusion.
Neither of these ads touted candidates, pushed people to vote one way or the other, or advanced a specific policy proposal. Those are the types of political advertising that would typically be required to carry a “paid for by” notification.
What appears to have happened is that Facebook has deployed algorithms that review advertising, flag anything touching on a policy topic, and treat it as political advertising.
The problem is that a very wide range of topics that are standard for businesses to discuss and promote can be considered as touching on policy, and are therefore potentially “political.”
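Facebook has not disclosed how its system actually works, but a minimal sketch of a naive keyword-based filter (a purely hypothetical stand-in, with an invented keyword list) illustrates why this kind of net is overbroad: any ad that merely mentions a policy-adjacent term gets swept up, regardless of intent.

```python
# Hypothetical sketch only -- Facebook's actual method is not public.
# A naive keyword filter treats any ad mentioning a policy-adjacent
# term as "political," which is exactly how benign ads get caught.

POLICY_KEYWORDS = {"jobs", "america", "lgbtq", "climate", "healthcare", "vote"}

def looks_political(ad_text: str) -> bool:
    """Flag an ad if it shares any word with the policy keyword list."""
    words = {w.strip(".,!?").lower() for w in ad_text.split()}
    return bool(words & POLICY_KEYWORDS)

# Ad copy like the rejected examples trips the filter...
print(looks_political("Bringing jobs back to America"))            # True
print(looks_political("We are committed to LGBTQ inclusion"))      # True
# ...while ordinary retail copy passes.
print(looks_political("Save big on our back-to-school sale"))      # False
```

The sketch has no notion of advocacy, candidates, or electoral intent; it only matches vocabulary, which is why a CSR message and a campaign ad look identical to it.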
Illustrative Example: CSR
The nexus between PR efforts and what could be considered “political” digital advertising will be important for communicators to understand. As both the P&G and Walmart examples show, even the standard messages that are routine for PR can be construed as treading close to policy matters.
For example, Corporate Social Responsibility (CSR) programs have become standard at many large corporations. By engaging in activities that demonstrate a company’s efforts to be socially accountable, CSR gives a company a path to burnish its reputation in a way that is completely within its control. Companies decide which CSR issues they want to promote and focus attention on, and how they want to accomplish the CSR goals they themselves establish.
While many companies tout “green” environmental initiatives for CSR efforts, some tackle other topics. General Electric’s Foundation focuses on community and education, while Zappos works with organizations to provide books, school supplies, and shoes to kids in need.
When we look at CSR issues, however, including environmental ones, it’s easy to see how an overly broad algorithmic net designed to identify political content may start catching even benign efforts:
- A company touting the addition of solar panels to save energy and address climate change.
- A corporation that is working to address the healthcare needs in a community.
- An organization that volunteers time to encourage girls to learn computer coding.
All of these are worthy efforts, and each of them, framed a certain way in an advertisement, could be construed as dabbling in policy issues. Even the lightest, most positive ad touting efforts on climate change, healthcare, or gender and diversity touches on policy outcomes, and therefore could get flagged as political.
Issues aren’t always policy
Discussing issues that relate to policy matters doesn’t have to become political. It’s a fine distinction, and one that an algorithm probably cannot parse.
By turning this decision-making over to what is most likely an algorithm, we get a blunt-hammer result rather than the scalpel precision a human decision-maker would likely provide. (Caution is advised here, because even a person could confuse some of these lines.)
Election laws that govern disclosure requirements can be complex. To navigate this process effectively, Facebook may need to assign review of these advertisements to human beings rather than relying on AI and algorithms.
We see this time and again: for now, human judgment still has a significant edge over automation when it comes to calls like these. It is particularly important for brands to know and understand how Facebook parses their advertisements, so that incidents such as these remain isolated ones.