The Facebook page of U.K. Prime Minister Boris Johnson.
“Fake news” probably seems like old news at this point, but it’s still a major worry for campaigners as Britain approaches its third general election in less than five years.
U.K. voters are set to head to the polls Thursday in a crucial election that will shape the future of the country and its contentious exit from the European Union. And the role played by technology couldn’t be more significant.
Last month, the ruling Conservative Party attracted criticism for renaming one of its official Twitter accounts to “factcheckUK” during a TV debate, masquerading as a fact checker while tweeting out pro-Conservative posts. Just two days later, the party launched a spoof website purporting to be the Labour Party’s manifesto while attacking its pledges.
More recently, Labour claimed to have obtained leaked trade documents showing the U.S. wanted complete market access to the British National Health Service after Brexit. However, the source of those documents has itself become a matter of controversy: Reddit linked the post on its site that first revealed them to a Russian interference campaign originally uncovered on Facebook.
Tech giants have stepped up their efforts in the wake of the 2016 U.S. presidential election to stamp out attempts at manipulating online political discourse. But in a post-Cambridge Analytica world, some experts fear the use of targeting algorithms and misinformation in political advertising could revive the problems of the past.
Should political ads be banned online?
Organizations like internet pioneer Tim Berners-Lee’s Web Foundation and privacy-focused web browser maker Mozilla have called for a “moratorium” on political ads in the U.K. for the current election cycle. While Web Foundation CEO Adrian Lovett doesn’t think online political ads should be banned outright — a move he likened to “taking a sledgehammer to crack a nut” — he says allowing them right now poses a risk to democracy.
Twitter has taken a stand on the issue, banning all political advertising from its platform last month. Google, meanwhile, has said it will limit targeting for political ads to basic demographic information like age and gender — effectively a ban on what’s been referred to as “microtargeting.” But Facebook, one of the biggest platforms for online advertising, has stood by its decision to accept political ads, even if they contain false information.
“Targeting is actually more important than focusing on the content itself, particularly in a political context,” said Paul-Olivier Dehaye, the co-founder of PersonalData.IO, who gave evidence to the U.K. Parliament on the Cambridge Analytica scandal last year. “Even though platforms could legally do a lot, given that they are private actors, they are loath to limit speech. Limiting targeting is not a limit on speech, but on ways to reach.”
Dehaye said that Facebook is “the most obscure channel” when it comes to political advertising, “as ads can be targeted to individuals in an opaque way, then amplified peer-to-peer through reshares.” He claimed the company can’t afford to take “half measures,” adding: “It needs to ban political ads.”
One thing Facebook has done to at least try to ensure transparency is to create a library of ads where users can see who paid for them and roughly what demographic they were targeted toward. The Ad Library also shows how much each party is spending on social media campaigns.
According to Facebook’s data, the Conservatives spent £81,897 in the seven days ending Dec. 5, while Labour spent £178,379 and the Liberal Democrats spent £175,152. But the feature has proved fallible: thousands of political ads temporarily disappeared from the archive with just days to go before the election.
“In the end, I think more radical and reliable transparency, with some consistency in policy between the platforms, is the way to go,” said Sam Jeffers, co-founder of political ad tracker Who Targets Me. “Political advertising on the internet is probably here to stay, so we need a way to ensure that a robust civil society can exist around it.”
Cambridge Analytica 2.0?
Pinning down how much political groups actually spend on digital ads has also been complicated by the emergence of so-called shadow campaigns, which obscure the source of funding for some political ads. A former executive of the official Brexit campaign group Vote Leave was accused of buying social media ads in favor of a smaller party, the Greens, in an effort to split the left-wing vote.
Spokespeople for Facebook, Google and Twitter were not immediately available for comment on this story. Facebook has said it “considered whether we should ban political ads altogether,” pointing out that they account for just 0.5% of the firm’s revenue, “but we believe it’s important that candidates and politicians can communicate with their constituents and would-be constituents.”
It’s unclear whether a Cambridge Analytica-style situation could emerge again. It was revealed last year that the political consultancy had improperly gained access to Facebook user data to target voters. Now the subject of a Netflix documentary, the debacle led to Facebook being hit with a huge $5 billion fine in the U.S. and a £500,000 ($657,000) fine in the U.K.
Dehaye, who featured in that documentary, said it’s hard to predict whether we could see another Cambridge Analytica-like event. “We still don’t know how bad it was during the U.S. election and Brexit, because the platforms have been so obscure about it.”
The user data was leaked via a personality quiz app, and Facebook has since pulled tens of thousands of apps from its platform in the wake of the breach. The Cambridge Analytica scandal heightened fears over how such specific behavioral data could be used to influence voting. But despite those fears, Jeffers questioned the degree to which people are actually swayed by such targeting.
“We’ve seen less of that this time around than even in 2017,” he said. “I think what you see instead is this recognition, which is quite Trump-like, that when there’s no gatekeeper, you can say what you want, and the Conservatives in particular have pushed very hard with that.”
Meanwhile, Pascal Crowe, data and democracy project officer at internet freedom campaign group the Open Rights Group, says more attention should be paid to the data held by political parties rather than tech platforms. The group has created an online tool “to show the data role that political parties themselves are playing.”