Close results from the US election last month show a split nation.
Many point to the spread of so-called “fake news”, or misinformation, as the reason for the divide. While misinformation has long existed, social media platforms provide fast and wide access to deceptive and divisive news stories. Find out how we got here, the future of fake news, and what challenges President-elect Joe Biden will have to face to ‘heal’ the nation.
2016: The social media election.
Both Hillary Clinton and Donald Trump saw a new political battleground during the 2016 election: social media. To assert their social media footprint, each campaign employed an ‘army’ of bots.
Bots are designed to post content automatically and without human involvement. While most bots are constructed to be helpful — like the Netflix bot which tweets out new releases on the platform — Trump and Clinton’s bots were created to ‘manipulate’ online opinion. They did so by interacting on Twitter like a normal human user, including “posting, following, replying, and retweeting.” Likeness to human users didn’t end there — Clinton and Trump bots “resembled real people”, complete with a bio and profile photo.
Both candidates were estimated to have over a million fake followers in 2016, many of them bots. To make a further impact on social media, bots from each campaign would form ‘botnets’, bot networks that work “in concert” by following each other and sharing similar content. Clinton and Trump bots were particularly active during pivotal moments of the presidential campaign, like influencing opinion on “who won a debate.”
Naturally, bots employed by the competing campaigns shared differing opinions, resulting in “echo chambers” with less bipartisan news — and fertile ground for bots to play a “major role” in sharing misinformation.
Election invasion, hacking or interference?
Bots employed by Trump and Clinton are not what's most remembered from the 2016 election. At least not as much as Russian troll farms. A troll farm is an organization of internet users working together to spread misinformation online.
In the 2016 presidential election, Russian nationals working for the Internet Research Agency (IRA) – a troll farm – were allegedly employed to "sow discord" within the United States. IRA employees reportedly used "stolen identities" of Americans to spread fake news on social media, from popular Facebook groups on the political Left and Right to organizing in-person rallies. The effort was reportedly so effective that Clinton supporters claim the IRA's troll farm swayed the election in Donald Trump's favor.
Other countries soon began following the Russian model. By 2019, governments in at least 70 countries were using bots, fake social media accounts, and troll farms to shape public opinion. The countries using these forms of disinformation range from smaller states like Eritrea to the United States, Great Britain, and Germany.
2020 and beyond.
In 2020, misinformation on social media is even harder to detect. Artificial intelligence (AI) has provided bots with improved ‘language’ skills, allowing for a ‘more subtle’ human-like approach when influencing opinion.
Some warn this type of bot technology will advance far enough to “drown out” human discourse on social media. This runs parallel with a rise in censorship of whistleblower leaks on social media, which DiEM25 has written about previously.
Troll farms are also becoming harder to distinguish from standard political campaigning. A month before the US election, news broke that a group of young conservative activists in Arizona was hired by the Trump-connected Charlie Kirk to operate a troll farm. Kirk, however, denies the comparison and equates the online posting to campaign “fieldwork”.
A similarly ambiguous label — "experimenting" — was applied when New Knowledge, a US cybersecurity firm connected to the Democratic Party, employed tactics used by the Russian Internet Research Agency during a 2017 election for the US Senate. Like Clinton followers in the aftermath of the 2016 election, some believe the tactics of bots, troll farms, and other forms of misinformation were effective enough to sway the outcome.
Meanwhile, governments have found that overwhelming public discourse with misinformation is more effective than outright censorship.
Rescuing social media from misinformation.
At DiEM25, we’ve long argued for a public takeover of social media.
This means ending the testing of social media on the populace without any gain for those using the platforms. Rather than generating profits only for private investors, society as a whole can take part and democratize the global conversation.
On the flip side, this does not mean censoring views we do not like. On the contrary. The monopolistic approach taken by companies like Facebook, Twitter, and other social media platforms rewards misinformation. How? By constructing algorithms that keep users in an echo chamber — only to then sell off information on these users to nameless advertisers, if not worse.
By spreading the wealth created by social media platforms, along with stronger digital rights for users, we as a society can build and profit rather than remain exploited serfs tilling the digital landscape.
Photo Source: Pexels.
The views and opinions expressed here are those of the author and do not necessarily reflect DiEM25’s official policies or positions.