November 15, 2018

“Fake influencer” takes on new meaning with CGI accounts

If fake influencers are a problem for PR, what are we going to do with AI-generated fake influencers?

There’s a saying that you should “fake it until you make it”—basically, play the part until you’re big-time.

Thus far, this has been one of the bigger problems with online influencers: they've purchased bulk social media accounts to pump up their follower counts, then used those inflated totals to command bigger dollars when striking deals with brands.

Sussing this out can be easy or hard, depending on a few factors, including how much time you're willing to put into the research and how "big" the influencers actually are.

Brands are left wondering if their “influencers” are worth the money they’re spending.

A recent post on Axios turns this notion on its head and raises a whole lot of questions for PR and communications professionals.

Fake, or fake-fake?

The social media influencers in the Axios piece aren't "fake" because they are inflating their follower numbers (at least as far as we know). They are "fake" because they literally are not real people.

Two "Instagram models," @lilmiquela (1 million followers) and @bermudaisbae (64,000 followers), are both computer-generated images; essentially, they are Sims characters with social media followings.

Now, it’s not that unusual for a brand to use a character or product as the “voice” of their social channels. One of my favorite Twitter accounts is voiced by a cookie (the edible kind, not the tracking kind).

What is different here is that these fake characters aren’t brand-identified—no one knows to whom the accounts/characters are connected, although according to the Axios piece there is some indication that an AI firm might be behind one of them.

Disclosure and transparency

These “influencers” have followers and have advertising deals. They promote products and ideas and can have a real effect on online discourse.

But again, they aren’t real people and we don’t know who or what is behind the accounts.

The Axios story notes that one of the “models” has mentioned using a specific hair-care product from an actual company.

This is basically a cartoon crediting a hair-care product.

Is it a real endorsement, or is this satire?

When Tony the Tiger says that Frosted Flakes are Grrrrrreat!, we know, and hopefully fully understand, that this is a cartoon fronted by the Kellogg Company, pitching a cereal. If Tony were to shoot a web video claiming that Frosted Flakes are what make his coat shiny and that a small bowl every day will help you lose weight, the FTC would know exactly whom to go to if questions arose about those claims.

Whom would it go to for claims made by these "models"?

Who gets the money for their endorsement deals?

Who is responsible if they ignite a real controversy?

Social channels and societal impact

If there’s one thing every social channel should be deeply considering on a fundamental level, it’s the potential for societal impact beyond just fun and games.

Mark Zuckerberg has essentially acknowledged that the spread of disinformation on Facebook has had negative societal implications.

Fake news, winding up the outrage machine—it all has an impact.

Now we have fake influencers who hold opposing and somewhat divisive opinions (one of these fake models is a progressive, the other is pro-Trump).

And yet their origins are opaque, and we don't know who controls the accounts. We lack the information we need to decide whether we're being goaded into reacting and responding, or whether this is some kind of performance art.

This matters to communicators

How much this will matter depends heavily on whether the use of fake people moves beyond what we're seeing in this particular story.

Sure, there are completely benign uses: raising awareness, or using this type of decoy to pull people into a narrative that serves a larger purpose. In this case, say, the opposing political viewpoints could be used to develop a story showing that while we might have our differences, we can all get along.

There is also plenty of potential for far more nefarious behavior.

If you're in public affairs PR and working on a legislative or regional initiative, how would you respond to a fake influencer chiming in to oppose your efforts?

How does a brand respond to a popular critic who isn’t human?

What do you do if a CGI character touches off a brand crisis?

These questions aren’t far-fetched, and they are ones that communicators are going to be grappling with sooner or later.

About The Author

Jennifer Zingsheim Phillips is the Director of Marketing Communications for CARMA. She is also the founder of 4L Strategies, and has worked in communications and public affairs for more than 20 years. Her background includes work in politics, government, lobbying, public affairs PR, content creation, and digital and social communications and media analysis.
