Making an analogy with print media, it reads kind of like
a National Enquirer headline.
But, it was shown on a Facebook page as
part of somebody's paid make-fun-of-the-Clintons advertising campaign.
This particular example is obviously fake to many users,
but not necessarily to all users.
The point is this:
users need to make the effort to identify each news-like item as fake, or not,
because Facebook itself is not making the effort.
February 25, 2017:
I'm still clueless about why that particular vicious ad was served to me.
This article in Scout,
"The Rise of the Weaponized AI Propaganda Machine"
(Berit Anderson and Brett Horvath),
suggests how it may have happened.
For Analytica, the feedback is instant and the response automated: Did this specific swing voter in Pennsylvania click on the ad attacking Clinton's negligence over her email server? Yes? Serve her more content that emphasizes failures of personal responsibility. No? The automated script will try a different headline, perhaps one that plays on a different personality trait -- say the voter's tendency to be agreeable toward authority figures. Perhaps: "Top Intelligence Officials Agree: Clinton's Emails Jeopardized National Security."
Much of this is done through Facebook dark posts, which are only visible to those being targeted.
Online personality analysis from clicks? Dark posts?
This is so much worse than I could have imagined.
Sounds like an evolution from Web 2.0 -- instead of suggesting
vacations we might want to take in our search results, the bots are
going to browbeat us with ads until they find a wedge issue that will
make us want to go vote for the candidate dear to the
heart of the paying client.
And Cambridge Analytica, with their botnets, is the Brit version of
a Russian troll farm? Sorry, but yuck.