Big Tech must step up now to combat midterm misinformation

We are just four months away from the 2022 midterm elections, and more than 100 candidates across the country have embraced Trump’s “Big Lie.” These candidates not only claim that the 2020 race was rigged, but also cast doubt on the legitimacy of the upcoming November election.

In 2020, allegations of voter fraud spread widely on social media. President Trump regularly tweeted election lies, and groups used Facebook to coordinate the January 6 insurrection. So far, however, reports indicate that social media companies may not be prepared for the next spate of election misinformation.

As Facebook pivots to focus on the metaverse, for example, the company has reduced the number of employees dedicated to election integrity from 300 to 60. Experts fear that this lack of resources and attention, combined with the sheer number of midterm races, could exacerbate the problem. In fact, internal research shows that Facebook struggles to spot misinformation in local contexts, like those common in midterm elections.

Instead of scaling back election integrity measures, platforms should be strengthening their safeguards. As researchers who study the intersection of social media, politics, and democracy, we are watching four key questions.

How will social networks respond to threats to democratic legitimacy?

Right now, one faction of the Republican Party has decided that election results, at least when they lose, are not legitimate. As a result, platforms must consider not only how to temper election misinformation, but also how to handle candidates who question the legitimacy of the process itself.

Platforms have numerous ways to moderate misinformation. Research shows that each works, and fails, to varying degrees. For example, several studies indicate that fact-checking can reduce belief in misperceptions, though these effects may decay over time. Another study found that attaching warning labels to Trump's 2020 election misinformation tweets, or blocking interaction with them, was not associated with a reduction in their spread, either on Twitter or on other platforms. And while recent work shows that accuracy nudges reduce belief in and sharing of misinformation, that approach has yet to be tested at scale across platforms.

Beyond the content itself, platforms must also deal with the users who spread election falsehoods, many of whom are political candidates. With the exception of Trump, companies have been largely reluctant to ban candidates who post misinformation. In fact, high-profile users like celebrities and politicians are essentially immune to Facebook's content moderation rules.

There is no silver bullet to stop misinformation on social media. Instead, platforms must work together to employ a variety of tools to slow its spread, fairly and equitably punish users who repeatedly violate the rules, and maintain trust by supporting open democratic discourse. The European Union’s new anti-disinformation code, which several platforms voluntarily signed on to in June, is an encouraging start.

How will companies prevent extremists from organizing on their platforms?

Social networks do not have a monopoly on the dissemination of anti-democratic content. In fact, Harvard's Berkman Klein Center found that 2020 election misinformation around mail-in voting was a "mass media-led, elite-driven process." However, social sites remain a primary place where groups, both pro-social and anti-democratic, can coordinate and mobilize. Classifying and moderating disallowed content is difficult; restricting groups' ability to mobilize is even harder, since content in small, closed groups can cause enormous damage.

So far, there have been some notable failures. Prior to January 6, Facebook banned the parent "Stop the Steal" group for language that spread hate and incited violence. However, it did not stop similar groups, which experienced "meteoric growth." Overall, a 2021 analysis found 267 pages and groups, many linked to QAnon and militia organizations, "with a combined following of 32 million, spreading content that glorifies violence in the heat of the 2020 election."

These groups, on Facebook and other platforms, were instrumental in coordinating the January 6 insurrection. With so many candidates still talking about rigged elections, we could see more violence after this year's midterms. Social platforms should do everything they can to disrupt these groups and make it harder for extremists to organize violence.

What about video?

For years, social media platforms were primarily text- and image-based. Now, video is dominant. TikTok, with over a billion monthly active users, is one of the most popular social networks. YouTube, the second most visited website after Google, remains under-researched. And even Facebook, once designed to connect family and friends, is shifting its focus to short-form video.

Platforms have struggled to build AI systems that can moderate text-based content at scale. How will they handle multimodal disinformation, shared as images, video, and audio? Reports suggest that misinformation is rampant on TikTok, particularly around COVID-19 vaccines and the Russian invasion of Ukraine. YouTube has done a better job of tweaking its algorithm to exclude potentially harmful videos. But as the midterms heat up, this is a critical area to watch.

Will platforms share their data?

Although we’ve come a long way in our understanding of these networks, it’s hard to really know what’s going on without access to more social media data. Access currently varies widely by platform.

Facebook's CrowdTangle tool helps us examine engagement with content, but researchers are concerned that it could be shut down at any time. Twitter has been an industry leader in data access, but Elon Musk's pending purchase of the company calls that access into question. Meanwhile, TikTok and YouTube share very limited data and remain largely closed to journalists and researchers.

There are currently several proposals in Congress that would guarantee researchers’ access to data, and the EU has just passed landmark rules regulating big tech. Although it is too late for these bills to make data accessible during this election cycle, these are promising developments for the future.

Without a doubt, social networks are not solely to blame for the current state of our democracy. Larger societal forces, including a fragmented media environment, geographic sorting by partisanship, and partisan gerrymandering, have helped drive polarization in recent decades. But social media often acts as an accelerant, exacerbating our institutional shortcomings.

Heading into the midterm elections, we hope that social media executives are concerned about the threats facing our democracy, and have developed or are developing comprehensive plans to help safeguard the electoral process.

Zeve Sanderson is the founding executive director of NYU's Center for Social Media and Politics (CSMaP). Joshua A. Tucker is a co-founder and co-director of CSMaP. He is Professor of Politics, Affiliated Professor of Russian and Slavic Studies, and Affiliated Professor of Data Science at New York University, as well as Director of NYU's Jordan Center for the Advanced Study of Russia. He is co-editor of the volume "Social Media and Democracy: The State of the Field" and co-chair of the independent academic research team on the 2020 US Facebook and Instagram Election Research Study.
