Facebook’s biggest threat is the law, not lawsuits

A flurry of legal challenges in the US will not fundamentally change the company in the same way that new European laws will.

Meta Platforms Inc. has become a lightning rod for legal challenges in the US, from the FTC antitrust case to shareholder lawsuits alleging the company misled investors. Last week, eight complaints were filed against the company in the US, including allegations that heavy use of Instagram and Facebook led young people to die by suicide or develop eating disorders. (Meta has not commented on the litigation and has denied the FTC allegations and shareholder complaints.)

While the lawsuits strike at the heart of Meta’s harmful social impact and might help educate the public on the details, they likely won’t force significant change at Facebook. This is because Section 230 of the Communications Decency Act of 1996 protects Facebook and other Internet companies from liability for much of what their users post. Unless US law changes, and there are no signs of that happening anytime soon, Meta’s attorneys may continue to use that defense.

But that will not be the case in Europe. Two new laws are coming up that promise to change the way Meta’s algorithms display content to its 3 billion users. The UK’s Online Safety Act, which could come into force next year, and the European Union’s Digital Services Act, which is likely to come into force in 2024, aim to prevent psychological harm from social platforms. They will force big internet companies to share information about their algorithms with regulators, who will assess how “risky” they are.

Mark Scott, chief technology correspondent for Politico and a close follower of those laws, answered questions about how they will work, as well as their limitations, in a Twitter Spaces conversation with me last Wednesday. Our discussion below has been edited.

Parmy Olson: What are the main differences between the upcoming UK and EU laws on online content?

Mark Scott: The EU law addresses legal but unsavory content, such as trolling, misinformation and disinformation, and tries to balance that with freedom of expression. Instead of prohibiting [that content] directly, the EU will ask platforms to monitor it, conduct internal risk assessments and provide better access to data for external researchers.

The UK law will be perhaps 80% similar, with the same ban on harmful content and the same risk-assessment requirements, but it will go one step further: Facebook, Twitter and others will also be legally required to have a “duty of care” for their users, which means they will have to take action against harmful but legal material.

Parmy: So, to be clear, EU law won’t require tech companies to take action against harmful content itself?

Mark: Exactly. What they will require is flagging it. They won’t require platforms to ban it outright.

Parmy: Would you say the UK approach is more aggressive?

Mark: It is more aggressive in terms of the actions required of companies. [The UK] has also put forward possible criminal sentences for tech executives who don’t follow these rules.

Parmy: What will risk assessments mean in practice? Will Facebook engineers have regular meetings to share their code with representatives of [UK communications regulator] Ofcom or EU officials?

Mark: They will have to show their homework to the regulators and to the rest of the world, so journalists or civil society groups can also look and say, “Okay, a powerful left-leaning politician in a European country is gaining ground. Why is that? What risk assessment has the company carried out to ensure [the politician’s] content is not amplified out of proportion in a way that could damage democracy?” It is that kind of boring but important work that regulators will focus on.

Parmy: Who will do the audit?

Mark: Risk assessments will be done internally and with independent auditors, like the PricewaterhouseCoopers and Accentures of this world, or more specialized independent auditors who can say, “Facebook, this is your risk assessment and we approve of it.” And then that will be overseen by regulators. UK regulator Ofcom is hiring around 400 to 500 more people to do this heavy lifting.

Parmy: However, what will social media companies do differently? Because they already publish regular “transparency reports” and have gone out of their way to clean up their platforms: YouTube has demonetized problematic influencers, and the QAnon conspiracy theory no longer appears in Facebook news feeds.

Will risk assessments lead tech companies to remove more problematic content as it emerges? Will they be faster at it? Or will they make sweeping changes to their recommendation engines?

Mark: You’re right, companies have taken significant steps to weed out the worst of the worst. But the problem is that we have to take the companies at their word. When Frances Haugen made internal Facebook documents public, she showed things we never knew about the system before, like the algorithmic amplification of harmful material in certain countries. So both the UK and the EU want to codify some of the existing practices of these companies, but also make them more public. To tell YouTube: “You are doing X, Y and Z to prevent this material from spreading. Show me, don’t tell me.”

Parmy: So essentially what these laws will do is create more Frances Haugens, except instead of relying on whistleblowers, auditors will come in and get the same type of information. Would Facebook, YouTube and Twitter make the resulting changes globally, as they did with Europe’s GDPR privacy rules, or just for European users?

Mark: I think companies will probably say they’re going global.

Parmy: You talked about technology platforms showing their homework with these risk assessments. Do you think they will honestly share what kind of risks their algorithms could cause?

Mark: That’s a very valid point. It will all come down to the power and expertise of regulators to enforce this. There is also going to be a lot of trial and error. It took about four years to smooth out the kinks once Europe’s GDPR privacy rules kicked in. I think that as regulators better understand how these companies work internally, they will know better where to look. Initially, I don’t think it will be very good.

Parmy: Which law will do a better job of enforcement?

Mark: The UK bill will be watered down between now and next year, when it’s expected to come into force. That means the UK regulator will have these powers almost defined, and then have them pulled out from under it for political reasons. The British have been very vague about how they are going to define “legal but harmful” [content that must be taken down]. The British have also made exceptions for politicians, but as we have seen recently in the United States, some politicians are the ones who deliver some of the worst untruths to the public. So there are some big holes that need to be filled.

Parmy: Where are these laws right and where are they wrong?

Mark: I think the idea of focusing on risk assessments is the best way to go. Where they have gone wrong is in feeling overly optimistic that they can really fix the problem. Disinformation and politically divisive material existed long before social media. The idea that some kind of bespoke social media law can fix that problem without fixing underlying cultural and social issues that go back decades, if not centuries, is a bit short-sighted. I think [British and EU] politicians have been very quick and eager to say, “Look at us, we’re fixing it,” whereas I don’t think they’ve been clear about what they’re fixing and what outcome they’re looking for.

Parmy: Is framing these laws around risk assessments a smart way to protect free speech, or is it disingenuous?

Mark: I don’t have a clear answer for you. But I think approaching this through risk assessments and mitigating those risks as much as possible is the way to go. We’re not going to get rid of this, but we can at least be honest and say, “This is where we see the problems and this is how we’re going to fix them.” The specificity is missing, which leaves a lot of gray area where legal fights can play out, but I also think that will get resolved over the next five years as legal cases are fought, and we’ll have a better idea of exactly how these rules will work.
