Increasingly, PR pros are relying on the metrics provided by social platforms as a core component of their measurement practices. It makes sense: the platforms own the user data, and if the owners of the information are the ones reporting it, the data are deemed reliable. It's also helpful to have another input for decisions like how much to spend on the "P" (paid) section of the PESO model.
Facebook recently disclosed that the metric it has provided for total time spent viewing video advertisements on its platform is flawed: the figures "vastly overestimated" viewing time for more than two years. For marketers who relied on this data, that is a significant problem.
According to a piece in the Wall Street Journal, Facebook initially disclosed the problem with its viewing metric in a post in a section directed at advertisers. After additional prodding by agency executives, Facebook provided more detail, disclosing that the metric "likely overestimated average time spent watching videos by between 60% and 80%" for as much as two years.
The WSJ piece points out that the incorrect calculation of this metric likely led marketers and advertisers to make Facebook ad spending decisions based on faulty data. It also notes that the incident could raise broader questions about so-called "walled gardens": since Facebook and Google keep their data close to the chest, learning that one metric was off invites closer scrutiny of the others. It has already led to calls for greater transparency:
A note from agency holding company Publicis said, "This once again illuminates the absolute need to have 3rd party tagging and verification on Facebook's platform. Two years of reporting inflated performance numbers is unacceptable."
Similarly, Martin Sorrell, CEO of WPP Plc, has called for independent evaluation of data, characterizing this type of activity as “marking their own homework.”
Despite the outcry, there are lessons to be learned. First, don't rely solely on data delivered by a platform. Any measurement program should amount to more than watching a single data point supplied by Facebook or anyone else. Second, look deeper than surface-level information. The flawed figure averaged video viewing time after removing from the data set any view shorter than three seconds, which inflated the result. The real trouble comes if that inflated figure was used to measure awareness: if awareness was the goal, the overstated numbers led to incorrect assumptions. If the program instead tied video views back to sales, donations, or downloads, it likely would have been evident that high viewing figures weren't translating into the measured activity.
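The mechanics of that inflation are easy to see concretely. Here is a minimal Python sketch, using an entirely made-up handful of view durations, that contrasts an honest average with one computed the way described above, after sub-three-second views are dropped:

```python
# A toy illustration of how dropping short views inflates an
# "average time viewed" metric. All numbers here are invented.

# Hypothetical view durations, in seconds.
views = [1, 2, 8, 25, 44]

# Honest average across every view.
true_avg = sum(views) / len(views)

# The flawed calculation: views shorter than three seconds are
# removed from the data set before averaging.
counted = [v for v in views if v >= 3]
inflated_avg = sum(counted) / len(counted)

print(f"true average:     {true_avg:.1f}s")      # 16.0s
print(f"inflated average: {inflated_avg:.1f}s")  # 25.7s
print(f"overstatement:    {inflated_avg / true_avg - 1:.0%}")  # 60%
```

With this invented sample the filtered average overstates the true one by about 60%; the real-world overstatement depends entirely on what share of views are very short.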
This point was made very clearly in a post on TechCrunch. After interviewing a number of marketers, the post included this line: “…the consensus was that there are far more important measurements to look at, and the error wouldn’t necessarily impact spend if marketers looked at analytics more holistically.” The post also notes that concerns about a lack of third-party verification are overblown, as there are options from Nielsen and Moat that provide verification of video viewing.
Ultimately, this kerfuffle won't break Facebook. The company has apologized for the error and put a fix in place. A smaller platform with less daily traffic would probably suffer more after an incident like this, but Facebook's audience size insulates it somewhat. The key takeaway is that a measurement program should be precisely that: a well-rounded program with a number of data points to collect and assess, so that no single figure, especially one provided by a platform, is the sole point on which the program relies.