In the wake of the U.S. presidential election, the media and blogosphere have been awash in reports and commentary claiming that fake news stories – especially those shared on Facebook – may have had an influence on the eventual outcome.
While such a claim is virtually impossible to prove, it is nevertheless disconcerting that major social networks clearly play a role in enhancing the distribution of false reports.
While Facebook and Twitter must address how to avoid having their algorithms duped by purveyors of prevarication, public relations pros must also take notice of this phenomenon and determine how to best address it in their own media monitoring and measurement efforts.
Drive for automation comes at a cost
In recent years, there has been an increasing drive toward automation for tracking and analyzing traditional and social media coverage.
Indeed, when I founded CustomScoop in 2000, part of the motivation was to introduce technology to an industry that had long relied almost solely on human labor to identify and deliver relevant news clips.
The fact is that technology has done wonders for the media intelligence industry, automating many routine tasks and helping to gather more information more quickly.
But automation can only take you so far.
At some point, humans inevitably enter the picture – either as consumers of reports or as analysts building on top of the raw data the software has compiled.
In our industry, the drive for fancier dashboards, more interactive tools, and flashy graphics often overshadows the hard work done by engineers to ensure data quality.
These shiny objects also mask the fact that human review remains an essential part of the process to ensure accuracy and quality of insight.
Software saves time while humans add value
None of this should be read to disparage technology – in fact, I’m a technologist at heart.
Without today’s powerful data mining tools, efficient spiders, speedy databases, and innovative software-based analytics, it would be virtually impossible to keep up with the flood of news that inundates communicators every day.
While we continue to wait for IBM’s Watson or some other artificial intelligence breakthrough to revolutionize the way we consume information, we must determine how best to provide our key stakeholders and clients with the results that they demand.
That’s where humans come into play.
Expert analysts can comb through automated and technologically augmented data to produce daily executive news briefings, detailed PR measurement reports, and meaningful analysis and insights.
Technology even makes this process faster and easier.
In the old days, analysts would copy and paste headlines and source data into briefing memos; today that task can often be done by simply checking a box in the software solution of choice.
Understanding technology’s limits makes us more effective
Most public relations professionals have come to understand that media monitoring and measurement technology has its limits.
For years, we have talked about how computers can’t accurately judge sentiment when sarcasm is involved or with the language limitations and shortcuts created by 140-character message limits.
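To see why sarcasm trips up automated sentiment tools, consider a minimal sketch of the kind of word-list scoring that simple systems rely on. This is a hypothetical toy scorer, not any vendor's actual algorithm; the word lists and function name are invented for illustration.

```python
# Toy lexicon-based sentiment scorer: +1 for each positive word, -1 for each negative.
# Real tools are more sophisticated, but the failure mode is the same.
POSITIVE = {"great", "love", "wonderful", "best"}
NEGATIVE = {"terrible", "hate", "awful", "worst"}

def lexicon_sentiment(text: str) -> int:
    """Return a naive sentiment score based only on word counts."""
    words = text.lower().replace(".", " ").replace(",", " ").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

# A sarcastic complaint scores as positive, because "great" is in the lexicon
# and the scorer has no sense of tone or context.
score = lexicon_sentiment("Oh great, another outage. Just what I needed.")
print(score)  # a positive score, despite the clearly negative intent
```

The scorer counts "great" as praise and misses the irony entirely, which is exactly the kind of error a human reviewer catches at a glance.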
Now we know that fake news represents another threat to accuracy.
Complicating matters further, even humans may not be able to readily identify a fake news story as such.
I routinely see smart friends share false news on Facebook – not out of malice or deceit but simply because the story seemed true enough to pass along.
And what about those stories where the truth itself may be elusive?
Especially when dealing with controversial topics – like politics – some people are all too ready to label news they don’t like as “false.”
Without human review, we’re left to question the accuracy of our reports.
By adding human curation to your media monitoring and measurement arsenal, you will overcome the limitations of technology and ensure accurate insights that help you make smarter decisions.