Data and metrics have become an important part of the work of public relations professionals. Where success was once measured by the thud of the clip book on the conference-room table (the bigger the thud, the greater the success), PR pros are now asked to identify, measure, and manage programs based on a wide range of data. And, for the most part, PR is better for it. Those who develop solid measurement programs have real data they can point to when asked about successes. This presents a twofold advantage. First, a company can better trace which PR programs actually helped fulfill business goals, which in turn allows for adjustments going forward: a virtuous cycle of improved PR programs. Second, the public relations department now has evidence of its programs' success, so it isn't quite as vulnerable a target when cost-cutting measures are deployed.
So how can anyone argue that too many metrics could be worse than none at all?
An overzealous or poorly targeted measurement program can produce large amounts of data, much of which won't be of use at the end of a program. Having too much data on hand when it is time for end-of-program analysis eats up time that could be better spent doing a deep dive into the areas that do matter. One example is collecting data on non-target populations: unless there is solid logic behind it that supports a business goal, gathering and analyzing data on a non-target population isn't going to yield insights that move a sales needle, and it can consume a fair amount of time.
Another example is gathering information on anything that can be counted. There are now many different ways to collect and parse data, particularly when one factors in proprietary data sets from social platforms. For every social platform a campaign runs on, there is a corresponding set of data that isn't standardized with any of the other data collected. Add in standard PR counting methods (clip counts, impressions, etc.) and suddenly the analysis team is awash in numbers that don't amount to much in the way of usable, actionable information.
The time factor
The examples above apply to a fixed-length program with an end target for data analysis. While data overload is significant in those situations, it can become downright unbearable for an ongoing program.
Setting up too many data points for ongoing programs means that not only is more data collected than necessary, the reporting burden also grows. A monthly task that seemed manageable at the outset can rapidly become unmanageable as month after month of extraneous data accumulates. The end users of these reports grow accustomed to seeing certain numbers, which is one way bad metrics such as advertising value equivalencies (AVEs) become embedded. The danger is that presenting too much information clouds the picture: useful data gets lost, or diminished in importance, when too many metrics are collected.
Is this really worse than collecting nothing at all? Ultimately, that depends on how the end user treats the information. If someone is so busy trying to make sense of a mountain of data that he or she misses an opportunity, then yes: having nothing at all would be better than being distracted by numbers that aren't useful, or hunting for the needle of useful information in a haystack of data. Losing sight of the big picture is the danger of getting too far into the weeds with data collection.
The ideal course is to first determine what information must be collected to support the goals established by the business or the PR program, and then focus on collecting that information well and analyzing it accurately.