The Conservative Party used disinformation tactics with a “new level of impunity” during last year’s general election, a report has found.
Researchers from King’s College London warned that the campaign had risked undermining public trust during the coronavirus pandemic.
Their report said Tories had “employed overt disinformation” to secure votes, such as by altering a video of Sir Keir Starmer and posing as a fact-checker on Twitter during a leaders’ debate.
“Even if some of these tactics are not novel, the impunity with which they were employed appears new, at least in the UK,” it added.
“When found out, Conservative Party representatives were unapologetic for rebranding their Twitter account as a fact-checking site, and for editing video footage of the interview with Sir Keir.”
The clip was edited to show the Labour leader, who was then the shadow Brexit secretary, failing to answer a question on the EU that he had in fact responded to in a live television interview.
The King’s College research said that while government policy had focused on social media as a source for disinformation, much of it was being “spread by domestic political actors” and news outlets.
Last year, a public information campaign was launched to “help the public spot false information” using a five-step list of checks.
Researchers found the Conservative Party had violated four of the criteria during the election campaign.
The report said it had deliberately shared “poorly formatted and low-quality” posters on social media, including a widely ridiculed tweet using the comic sans font, to gain attention through critical coverage.
Report author Dr Francesca Granelli, from King’s College’s Department of War Studies, told The Independent: “What’s happening is they are seeing other people’s elections and certain traits are being picked up and run with. Some things are good and some are bad.
“I’d like to think the UK is not as bad as in other countries – the US is leading the way with that very muddy space.”
In separate research, the fact-checking organisation First Draft found that Labour, the Conservatives, and the Liberal Democrats all published misleading advertising during the campaign.
But the Tories were “by far the most frequent”, with 88 per cent of their most shared online adverts between 1 and 4 December containing misleading information, compared to 6.7 per cent for Labour.
King’s College researchers said it was “difficult to calculate” whether the efforts had an impact on the election result, which saw Boris Johnson returned as prime minister with a majority increased by 47 seats.
They warned that the coronavirus pandemic could operate as a measure of public trust in the governing party, adding: “On a daily basis, the government is trying to persuade British citizens that its reported death tolls are accurate.
“It wants citizens to believe the message that it is ‘succeeding’ in controlling the epidemic, despite its recent electoral record of using disinformation tactics and admitting it unapologetically.”
A spokesperson for the Conservative Party said it rejected what it called “wild assertions”.
“All Conservative Party advertising was labelled, following best practice on digital imprints,” they added.
“We are delivering on the promises the British public elected us on, getting Brexit done, investing in our public services and levelling up across the country.”
The research classed disinformation as a deliberate lie or misrepresentation, while misinformation can be created or spread accidentally.
Dr Granelli said there was a perception that disinformation was “happening at a more industrial rate than has ever happened before” amid a global rise in online communications.
But she warned that it was hard to measure the impact on voting behaviour because the reasons for people’s decisions, or behind the views they form, were not fully known.
Dr Granelli noted that the period since the 2016 EU referendum, which has seen two general elections and two leadership elections each for the Conservatives and Labour, has meant party-political campaigning “continuing for a lot longer than it normally would”.
The report called for more research to be done across multiple platforms to examine how false information spreads and takes root, calling claims that “Facebook, or bots, or the Russians are the core threat” a “misdiagnosis” of the problem.
“A lot of the time, people are looking for information that reinforces their beliefs,” Dr Granelli said.
“Social media has been the focus [of anti-disinformation campaigns] but it’s not the sole area where people get news.
“It might be their friends, parents, newspapers, celebrities or musicians.
“It’s almost a perfect storm – the old gatekeepers have been removed and we haven’t found the right way to replace them.”